On 14/09/05, Dhaemon <dhaemon at gmail.com> wrote:
> Hi,

Well, it's not so much that the monads are cheating -- sure, in some
sense, the IO monad needs special support to actually run an action
that's produced when your program starts, but the cheating that I was
referring to was a little backdoor to the runtime system called
unsafePerformIO :: IO a -> a. When the expression (unsafePerformIO
myAction) gets evaluated, myAction is actually run (and performs any
side effects it has) and the result returned.

You don't use this under any ordinary circumstances in writing a
Haskell program. It's there only for those cases where you have an IO
action that's going to compute a pure function, and whose side effects
are ignorable (that is, you don't mind if they occur randomly and
possibly multiple times), and you can't find another way around it --
this occasionally comes up when making Haskell bindings for C
libraries, for instance.

But it's important to notice that for basically all monads other than
IO (or those built on top of IO), even "running" the computations to
extract values will have no real-world side effects, and so they're
effectively just another way to think about and write pure functional
code.

A small example:

    sums :: (Num a) => [a] -> [a] -> [a]
    sums xs ys = do
        x <- xs
        y <- ys
        return (x + y)

This code, in the list monad, will return all possible sums of pairs
of elements from the two input lists, and is certainly a pure
function. By just changing the type signature a bit (or removing it),
this code can be made to do other related things in other monads. For
example, with an appropriate binary tree monad, it will form the
binary tree which looks like xs with each (Leaf x) replaced with a
copy of ys whose leaves have been incremented by x.

No cheating is going on here -- it's just the (purely functional)
definitions of return and bind (>>=) for different monads, and the
translation of the do-notation into returns and binds that makes this
possible.

 - Cale
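To make that last point concrete, here is the same function after the
do-notation is desugared into plain binds, next to the list monad's
standard (and purely functional) definitions -- a sketch of what the
translation effectively produces:

    sums :: (Num a) => [a] -> [a] -> [a]
    sums xs ys = xs >>= \x -> ys >>= \y -> return (x + y)

    -- For lists, the standard instance is equivalent to:
    --   return x = [x]
    --   m >>= f  = concatMap f m
    --
    -- so, for example: sums [1,2] [10,20] == [11,21,12,22]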
http://www.haskell.org/pipermail/haskell-cafe/2005-September/011264.html
On 11/17/2011 07:15 PM, Eric Blake wrote:
> On 11/17/2011 01:11 PM, Stefan Berger wrote:
>> The previous patch extends the priority of filtering rules into
>> negative numbers. We now use this possibility to interleave the
>> jumping into chains with filtering rules to, for example, create
>> the 'root' table of an interface with the following sequence of
>> rules: The '-p ARP -j ACCEPT' rule now appears between the jumps.
>> Since the 'arp' chain has been assigned priority -700 and the
>> 'rarp' chain -600, the above ordering can now be achieved with the
>> following rule:
>>
>>   <rule action='accept' direction='out' priority='-650'>
>>     <mac protocolid='arp'/>
>>   </rule>
>>
>> This patch now sorts the commands generating the above shown jumps
>> into chains and interleaves their execution with those for
>> generating rules.
>
> Overall, it looks like it does what you say. But it may be worth a v6.

I'll do it, but can we defer this patch for later so it doesn't cause
too much churn on all other pending patches (series)?

> @@ -2733,15 +2737,22 @@ ebtablesCreateTmpSubChain(virBufferPtr b
>      PRINT_CHAIN(chain, chainPrefix, ifname,
>                  (filtername) ? filtername : l3_protocols[protoidx].val);
>
> -    virBufferAsprintf(buf,
> +    virBufferAsprintf(&buf,
> +                      CMD_DEF("%s -t %s -F %s") CMD_SEPARATOR
> +                      CMD_EXEC
> +                      CMD_DEF("%s -t %s -X %s") CMD_SEPARATOR
> +                      CMD_EXEC
>
> Looks like my comments on v4 4/10 would apply here as well - a patch
> 11/10 that refactored things to use a shell variable initialized
> once up front, instead of passing repetitive command names through
> %s all over the place, might make this generator easier to follow.
> But not a problem for the context of this patch. This hunk adds 6
> printf args,
>
>                        CMD_DEF("%s -t %s -N %s") CMD_SEPARATOR
>                        CMD_EXEC
>                        "%s"
> -                      CMD_DEF("%s -t %s -A %s -p 0x%x -j %s") CMD_SEPARATOR
> +                      CMD_DEF("%s -t %s -%%c %s %%s -p 0x%x -j %s")
> +                      CMD_SEPARATOR
>
> and this hunk has no net effect, but generates a string which will
> be handed as the format string to yet another printf? Wow, that's a
> bit hard to follow...
>
>                        CMD_EXEC
>                        "%s",
>                        ebtables_cmd_path, EBTABLES_DEFAULT_TABLE, chain,
> +                      ebtables_cmd_path, EBTABLES_DEFAULT_TABLE, chain,
> +                      ebtables_cmd_path, EBTABLES_DEFAULT_TABLE, chain,
>                        CMD_STOPONERR(stopOnError),
>
> @@ -2750,6 +2761,24 @@
>                        CMD_STOPONERR(stopOnError));
>
> +    if (virBufferError(&buf) ||
> +        VIR_REALLOC_N(tmp, (*nRuleInstances)+1) < 0) {
> +        virReportOOMError();
> +        virBufferFreeAndReset(&buf);
> +        return -1;
> +    }
> +
> +    *inst = tmp;
> +
> +    memset(&tmp[*nRuleInstances], 0, sizeof(tmp[0]));
>
> Using VIR_EXPAND_N instead of VIR_REALLOC_N would take care of this
> memset for you.

With the side effect that I need an additional variable count =
*nRuleInstances; Converted..

> +    tmp[*nRuleInstances].priority = priority;
> +    tmp[*nRuleInstances].commandTemplate =
> +        virBufferContentAndReset(&buf);
>
> ...If I followed things correctly, commandTemplate now has exactly
> two print arguments, %c and %s. But looking further, it looks like
> you already have other commandTemplate uses just like this.

Yes, there are others. I had to convert it to be able to treat the
'jumping into subchains' equivalent to 'normal filtering rules'.
> ebiptablesRuleOrderSort(const void *a, const void *b)
> {
> +    const ebiptablesRuleInstPtr insta = (const ebiptablesRuleInstPtr)a;
> +    const ebiptablesRuleInstPtr instb = (const ebiptablesRuleInstPtr)b;
> +    const char *root = virNWFilterChainSuffixTypeToString(
> +                           VIR_NWFILTER_CHAINSUFFIX_ROOT);
> +    bool root_a = STREQ(insta->neededProtocolChain, root);
> +    bool root_b = STREQ(instb->neededProtocolChain, root);
> +
> +    /* ensure root chain commands appear before all others since
> +       we will need them to create the child chains */
> +    if (root_a) {
> +        if (root_b) {
> +            goto normal;
> +        }
> +        return -1; /* a before b */
> +    }
> +    if (root_b) {
> +        return 1; /* b before a */
> +    }
> +normal:
> +    return (insta->priority - instb->priority);
>
> Refer to my review earlier in the series about adding a comment how
> priority is in a bounded range, so the subtraction is safe.

Done. I guess I didn't pay enough attention when converting the code.
Fixed this instance.

> @@ -3315,8 +3372,11 @@ ebtablesCreateTmpRootAndSubChains(virBuf
>                                    filter_names[i].key);
>          if ((int)idx < 0)
>              continue;
> -        rc = ebtablesCreateTmpSubChain(buf, direction, ifname, idx,
> -                                       filter_names[i].key, 1);
> +        priority = virHashLookup(chains, filter_names[i].key);
>
> Why do a virHashLookup, when you already have filter_names[i].value?
> (See, I knew there was a reason to return key/value pairs).
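Eric's note about the subtraction is a general point worth spelling
out: a comparator of the form (a->priority - b->priority) is only
correct because priority here is known to lie in a bounded range. For
arbitrary ints the difference can overflow and break qsort's ordering.
A minimal generic illustration (plain C, not libvirt code):

    #include <limits.h>

    /* Unsafe for unbounded values: e.g. INT_MIN - 1 overflows
       (undefined behavior), and the sign of the result is wrong. */
    static int cmp_by_subtraction(const void *a, const void *b)
    {
        return *(const int *)a - *(const int *)b;
    }

    /* Safe for any int values: compare instead of subtracting. */
    static int cmp_by_comparison(const void *a, const void *b)
    {
        int x = *(const int *)a;
        int y = *(const int *)b;
        return (x > y) - (x < y);
    }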
https://www.redhat.com/archives/libvir-list/2011-November/msg00992.html
On Wed, Dec 22, 2010 at 01:32:15AM +0200, Kirill A. Shutemov wrote:
> On Mon, Dec 20, 2010 at 09:46:44AM -0500, J. Bruce Fields wrote:
> > By the way, was there ever a resolution to Trond's question?:
> >
> > "The keyring upcalls are currently initiated through the same
> > mechanism as module_request and therefore get started with the
> > init_nsproxy namespace. We'd really like them to run inside the
> > same container as the process. As part of the same problem,
> > there is the issue of what to do with the dns resolver and
> > Bryan's new keyring based idmapper code."
>
> I'm not sure that I understand the problem correctly.
>
> Currently, idmap uses dentry taken from client's cl_rpcclient->cl_path
> (see nfs_idmap_new()). cl_rpcclient (and cl_path) is initialized with
> rpcmount resolved against mount namespace of mount process (see
> nfs_create_rpc_client()).
> I assume it's correct.

There's actually two separate sets of idmapper code; look at
fs/nfs/idmapper.c. The first part of the file (between #ifdef
CONFIG_NFS_USE_NEW_IDMAPPER and #else) is idmapping code that uses
request_key(). The code you're looking at (including nfs_idmap_new())
is later in the file, and deprecated.

--b.
http://lkml.org/lkml/2010/12/21/296
> My point was to mainly identify the performance culprits and provide
> a direct access to those "namespaces" for performance reasons.
> So we all agreed on that we need to do that..

After having looked at Eric's patch, I can tell that he does all these
dereferences in quite the same amount. For example, lots of
skb->sk->host->..., while in OpenVZ it would be econtainer()->...,
which is essentially current->container->...

So until someone did measurements it looks doubtful that one solution
is better than the other.

> Question now (see other's note as well), should we provide
> a pointer to each and every namespace in struct task.
> Seems rather wasteful to me as certain paths/namespaces are not
> exercised heavily.
> Having one object "struct container" that still embodies all
> namespaces still seems a reasonable idea.
> Otherwise identifying the respective namespace of subsystems will
> have to go through container->init->subsys_namespace or similar.
> Not necessarily bad either..

Why not simply container->subsys_namespace?

Kirill
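For readers following along, the two layouts being weighed look
roughly like this (a sketch with hypothetical type and field names,
not actual kernel code):

    /* Option 1: one pointer per namespace directly in the task. */
    struct task_struct_a {
            struct net_ns *net_ns;   /* current->net_ns */
            struct uts_ns *uts_ns;   /* ... one field per subsystem */
    };

    /* Option 2: a single container object embodying all namespaces. */
    struct container {
            struct net_ns *net_ns;
            struct uts_ns *uts_ns;
    };

    struct task_struct_b {
            struct container *container;  /* current->container->net_ns */
    };

Option 1 trades a fatter task struct for one fewer dereference on hot
paths; option 2 keeps the task struct small at the cost of the extra
indirection the thread is debating.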
https://lkml.org/lkml/2006/2/8/209
How to: Use Indexed Properties in COM Interop Programming (C# Programming Guide)

Indexed properties improve the way in which COM properties that have
parameters are consumed in C# programming. Indexed properties work
together with other features introduced in Visual C# 2010, such as
named and optional arguments, a new type (dynamic), and embedded type
information, to enhance Microsoft Office programming.

In earlier versions of C#, methods are accessible as properties only
if the get method has no parameters and the set method has one and
only one value parameter. However, not all COM properties meet those
restrictions. For example, the Excel Range property has a get accessor
that requires a parameter for the name of the range. In the past,
because you could not access the Range property directly, you had to
use the get_Range method instead, as shown in the following example.
Indexed properties enable you to write the following instead:

Similarly, to set the value of the Value property of a Range object in
Visual C# 2008 and earlier, two arguments are required. One supplies
an argument for an optional parameter that specifies the type of the
range value. The other supplies the value for the Value property.
Before Visual C# 2010, C# allowed only one argument. Therefore,
instead of using a regular set method, you had to either use the
set_Value method or a different property, Value2. The following
examples illustrate these techniques. Both set the value of the A1
cell to Name. Indexed properties enable you to write the following
code instead.

You cannot create indexed properties of your own. The feature only
supports consumption of existing indexed properties.

The following code shows a complete example. For more information
about how to set up a project that accesses the Office API, see How
to: Access Office Interop Objects by Using Visual C# 2010 Features (C#
Programming Guide).

    // You must add a reference to Microsoft.Office.Interop.Excel to run
    // this example.
    using System;
    using Excel = Microsoft.Office.Interop.Excel;

    namespace IndexedProperties
    {
        class Program
        {
            static void Main(string[] args)
            {
                CSharp2010();
                //CSharp2008();
            }

            static void CSharp2010()
            {
                var excelApp = new Excel.Application();
                excelApp.Workbooks.Add();
                excelApp.Visible = true;

                Excel.Range targetRange = excelApp.Range["A1"];
                targetRange.Value = "Name";
            }

            static void CSharp2008()
            {
                var excelApp = new Excel.Application();
                excelApp.Workbooks.Add(Type.Missing);
                excelApp.Visible = true;

                Excel.Range targetRange = excelApp.get_Range("A1", Type.Missing);
                targetRange.set_Value(Type.Missing, "Name");
                // Or
                //targetRange.Value2 = "Name";
            }
        }
    }
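The short inline snippets referenced in the prose above did not
survive extraction; judging from the complete example, the before and
after pairs are presumably these (a condensed sketch drawn from the
code above, not the original inline listings):

    // Getting a Range: old style vs. indexed property.
    Excel.Range r1 = excelApp.get_Range("A1", Type.Missing); // C# 2008 and earlier
    Excel.Range r2 = excelApp.Range["A1"];                   // C# 2010

    // Setting its value: old style vs. indexed property.
    targetRange.set_Value(Type.Missing, "Name"); // or: targetRange.Value2 = "Name";
    targetRange.Value = "Name";                  // C# 2010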
http://msdn.microsoft.com/en-us/library/ee310208(VS.100).aspx
In today’s Programming Praxis exercise, our goal is to simulate a
Galton board and display the frequencies of the different bins as a
histogram. Let’s get started, shall we?

Some imports:

    import Control.Applicative
    import Control.Monad
    import System.Random

When dropping a single marble we simulate a number of coin tosses to
see which bin the marble ends up in.

    marble :: Int -> IO Int
    marble bins = sum . take (bins - 1) . randomRs (0, 1) <$> newStdGen

When dropping multiple marbles we count how often each bucket is hit.

    marbles :: Num a => Int -> Int -> IO [a]
    marbles n bins = flip fmap (replicateM n $ marble bins) (\results ->
        map (fromIntegral . length . flip filter results . (==)) [0..bins - 1])

Displaying the histogram is a matter of scaling the values and
printing the appropriate number of asterisks. You can choose the
horizontal scaling to get more or less detailed results.

    histogram :: RealFrac a => a -> [a] -> IO ()
    histogram w cols = mapM_ (\n -> putStrLn $
        replicate (ceiling $ n * w / maximum cols) '*') cols

The program itself consists of printing the results of the simulation
as a histogram.

    main :: IO ()
    main = histogram 20 =<< marbles 1000 8
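As a quick deterministic check of histogram (an example of mine, with
a made-up symmetric distribution rather than simulation output), the
row lengths can be computed by hand from ceiling (n * w / maximum):

    -- ghci> histogram 20 [1,2,4,8,4,2,1]
    -- prints rows of 3, 5, 10, 20, 10, 5, 3 asterisks,
    -- since each row gets ceiling (n * 20 / 8) stars.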
http://bonsaicode.wordpress.com/2012/04/10/programming-praxis-galton/
Styleguides are handy because they document conventions for projects,
making communication easier across the team whether it’s between PM
and dev, client and consultant, or front-end and back-end. Transitions
are a little easier with declared standards already in place (pair
rotation, onboarding new hires, transfer of responsibilities). From a
technical perspective, styleguides keep the front end DRY with project
standards and decrease the frequency of visual inconsistencies.

Despite the benefits styleguides offer, the rote documentation of CSS
isn’t exactly an inspiring challenge. So the team at Trulia made a gem
called Hologram that parses and documents CSS for you. The setup isn’t
always straightforward; but after configuring Hologram with some help
from Guard, no additional work is needed to maintain the styleguide.
In this post, we’ll walk through the steps of creating the setup.

Overview

Hologram parses your assets (sass, less, css, md, styl, js) for
comments of the following format and generates an .html file for each
category of component using the _header.html and _footer.html partials
Hologram provides.

    /*doc
    ---
    title: Alert
    name: alert
    category: alerts
    ---

    ```html_example
    <div class='alert'>Hello</div>
    ```
    */
    .alert{
      color: blue;
    }

From the comments above, Hologram will create a file called
alerts.html that has one component, .alert, and inserts an html
snippet demonstrating its usage.

The stock output is pretty bare-bones, so we wanted to give it some
styling. This is where Rails and its asset pipeline became too
helpful: it takes some manual configuration to sequester the
styleguide styles so that they don’t pollute the overall app styling.
We chose to remove require_tree from the manifest files and include
components individually. Here are step-by-step instructions.

Installation

Gems

In Gemfile, add gem 'hologram', github: 'trulia/hologram'. (At the
time of writing, it’s necessary to specify the github repo because
it’s ahead of the published gem and allows some functionality that we
later leverage: using categories in the _header partial.)

Hologram doesn’t watch your stylesheets for changes, so to avoid
having to run it manually after changes, use Guard: in Gemfile, add
gem 'guard-hologram', github: "kmayer/guard-hologram", require: false.
(Have to specify the repo because the gem wasn’t pushed.)

bundle to install the gems.

Hologram

bundle exec hologram init will generate hologram_config.yml in the
project root. This file contains five settings:

- source: This is where you specify the assets that will be documented
  by Hologram. It’s recursive, and for standard Rails apps it makes
  sense to just list ./app/assets
- destination: This is the directory that Hologram outputs to. We use
  ./public/styleguide.
- documentation_assets: This is where Hologram reads from in order to
  compile styleguide files. (Most important are the _header.html and
  _footer.html partials that are generated after hologram init.) We
  left this at the default ./doc_assets.
- dependencies: Left this one blank and included assets in source.
- index: Hologram will create an .html file for each category of
  components documented; setting an index just tells Hologram which
  category to use as the index. Left it at the default basics.

Guard-Hologram

Shell guard init, which will add Hologram to your Guardfile.
Change the Hologram part of the Guardfile to be:

./Guardfile

    guard "hologram", config_path: "hologram_config.yml" do
      watch(%r{app/assets/stylesheets/.*css})
      watch(%r{app/assets/javascripts/.*js})
      watch(%r{doc_assets/*})
      watch('hologram_config.yml')
    end

Now Guard will recompile your styleguide whenever the components in
app/assets are changed, once you run Guard.

Creating the Styleguide

Using Hologram

The syntax Hologram parses in order to include components in the
styleguide is:

    /*doc
    ---
    title: Alert
    name: alert
    category: basics
    ---

    ```html_example
    <div class='alert'>Hello</div>
    ```
    */
    .alert{
      color: blue;
    }

This tells Hologram what title to list the component under (“Alert”),
the name used to apply it (“alert”), and the category it falls in
(“basics”).

Initial Run

bundle exec guard to start watching your assets for changes. Now you
won’t have to worry about manually running Hologram all the time. To
see it work, document one element of your styles in the above format,
then wait for Guard to run to see what Hologram puts together in
./public/styleguide.

According to the configuration from above, Hologram will construct
files for each category, plus an index, using
./doc_assets/_header.html and ./doc_assets/_footer.html, and place
them in ./public/styleguide. If you copy/paste the above example,
you’ll get a file called basics.html and a file called index.html in
./public/styleguide; both will have the same content since in
./hologram_config.yml we set index to point to basics.

rails s, and you’ll see that localhost:3000/styleguide directs you to
the constructed styleguide! That’s awesome, but since the team at
Trulia wanted you to be able to customize the look of your styleguide,
they left it pretty bare. Let’s make it look better.

Customizing the Styleguide

We wanted to mimic the styling demo-ed here. To get there, we need to
make the following changes:

- Add categories to a top-level navigation bar
- Add styling to color code snippets
- Add some styling to the styleguide elements (but not let these
  confound the overall app’s styles).

The first two are easy to accomplish; just grab the code from Trulia’s
Hologram Example. The last point we achieve by changing the
application’s CSS manifest to include files individually and creating
a styleguide.css.scss that holds styleguide-specific styles. Your
styles should be namespaced anyway, but this approach makes sure there
is no overlap.

Getting Trulia’s Styles

Grab docs.css and github.css as well as screen.css and put them all in
./app/assets/stylesheets. The first two are styleguide-specific
styling (github.css colors code fragments, and docs.css styles
Hologram-generated classes like container); screen.css will serve as
the styles for the app itself.

In ./doc_assets/_header.html, replace the code with the below code
snippet, which is adapted from Trulia’s Hologram Example. The
modifications made were:

- changing stylesheet and script inclusion to require files compiled
  by the Rails asset pipeline
- adding a loop to auto-generate rather than hard-code links to each
  category. (Dynamically creating category links is where using the
  Hologram gem from the Github repo comes into play; in the version
  pushed to RubyGems, we don’t have access to categories in partials.)
./doc_assets/_header.html

    <!doctype html>
    <html>
    <head>
      <title>HologramExample</title>
      <link rel=stylesheet
      <link rel=javascript
    </head>
    <header class="header pbn" role="banner">
      <div class="backgroundHighlight typeReversed1">
        <div class="container">
          <h1 class="h2 mvs">My Styleguide</h1>
        </div>
      </div>
      <div class="backgroundLowlight typeReversed1">
        <div class="container">
          <span>
            <ul class="docNav listInline">
              <% for c in categories %>
                <li><a href="/styleguide/<%= c[0] %>.html"><%= c[0] %></a></li>
              <% end %>
            </ul>
          </span>
        </div>
      </div>
    </header>
    <div class="content">
      <section>
        <div class="line">
          <div class="col cols4">
            <div class="componentMenu box boxBasic backgroundBasic">
              <div class="boxBody pan">
                <ul class="componentList listBorderedHover">
                  <% blocks.each do |block| %>
                    <li><a href="#<%= block[:name] %>"><%= block[:title] %></a></li>
                  <% end %>
                </ul>
              </div>
            </div>
          </div>
          <div class="main col cols20 lastCol">

You’ll notice that the header leaves a few tags open. Close them in
./doc_assets/_footer.html:

            </div>
          </div>
        </section>
      </div>
    </body>
    </html>

Fire up rails s and check it out at /styleguide. It should index to
whatever you specified earlier in hologram_config.yml. This is great,
but there’s one concern: the styleguide-specific styling (docs.css,
github.css) is compiled into application.css per default Rails asset
pipeline behavior, which means that it could leak into your core
application styles if you’re not careful about namespacing CSS
classes.

Isolating Styleguide Styles

To ensure that the styleguide styles don’t pollute the rest of the
app’s stylesheets, rip out the default require_tree from
./app/assets/stylesheets/application.css and add each .css file
manually. This allows you to specify only the application’s styles,
omitting the styleguide styles. In this example, the application’s
styles are contained in alert.css.scss, bubble.css.scss,
tooltip.css.scss, and screen.css; so we get this:

./app/assets/stylesheets/application.css.scss

    @import 'alert';
    @import 'bubble';
    @import 'tooltip';
    @import 'screen';

Now we need to set up a styleguide.css that will be used only by the
styleguide. Add a styleguide.css.scss to ./app/assets/stylesheets and
include the styleguide-specific styles. In the example, it looks like
this:

./app/assets/stylesheets/styleguide.css.scss

    @import 'docs';
    @import 'github';

Back in the styleguide header ./doc_assets/_header.html, add a
<link rel=stylesheet tag to the stylesheet inclusion to grab the new
styleguide.css. This applies to all styleguides since this header will
be used by all files generated by Hologram.

One last step is required to make the styleguide styles available in
production. We need to tell Rails to precompile styleguide.css.scss
since it’s not part of application.css.scss; otherwise, nothing in
styleguide.css.scss will be available in production.

./config/environments/production.rb

    config.serve_static_assets = true
    config.assets.precompile += %w( styleguide.css )
http://pivotallabs.com/author/jchou/
Can edi

How to run Servlet program in IBM Websphere
Hi Friends.... I am a fresher on the Java platform... I have a little
knowledge of web application Java packages like JSP, Servlets, etc...
Now I want help from your side.... If you know how to run a servlet
program in IBM WebSphere....

web application and client server application
Please tell me the exact meaning of client server application and web
application. Kindly do the needful. Thanking you.

i need help
Can someone tell me about the history and evolution of client/server
and web applications, and multilevel architecture? If anyone has
materials, please tell me.

servlet
I like Java programming......

Need Programs
I need servlet programs.

Login Page
Please help me to create a login page with session by using servlets.

Please explain
Wrong information given... check this sentence written above: "Unlike
applets they do not require support for java in the web browser." Sir,
applets need Java-supported browsers...

opinion on site
I found it quite interesting; each topic is given in detail and
clearly. Thank you a lot for your work.

history of servlets
Your description of servlets is very useful to us. Thanking you. IIMT

how to run java program on websphere

servlets applications
what are sessions in servlets?
The default time-out value for a session variable is 20 minutes, which
can...

servlets
Why is it declared as an abstract class? What benefits can we get by
declaring it like this (i.e., without containing abstract methods)?
... except return an error status code to the client. Programmers can
override...

servlets
How can I run a Java servlet thread-safety program using...
{ res.setContentType("text/html"); PrintWriter out=res.getWriter();
out.println("<...
PrintWriter out=res.getWriter(); if(req.getParameter("withdraw")!=null)
{ if(amt>

Servlets
{ res.setContentType("text/html"); int count=0; PrintWriter out...
the solution for this problem. And how can we deploy the servlet in
Tomcat?

can u plz explain the http request methods - JSP-Servlet
Can you please explain the HTTP request methods? ... for HTTP
servlets. The servlet container creates an HttpServletRequest
object... duplicate entries in the MySQL database we can follow one of
these following approaches...

jsp and servlets
Developing a website is a generic question; you may have to find out
the usage, maintainability, and technical expertise, and then you can
go for MVC I or II architecture where you can use the Struts or Spring
framework. Chandraprakash Sarathe

plz help me any one as fast as u can
A thief Muthhooswamy planned... is given in an array. Write a program
to find out the total number of jumps he... is the number of metres he
can jump (1<=climbUp<=10^10) climbDown

Java.. pls.
Hi, sir. It compiles, but when I search or purchase it didn't go
through. Sir, can you help me? Please, Sir..
Import java.io.*; public class nathan{
String title[]={"Biography","Computers","Science","Sports
http://roseindia.net/tutorialhelp/allcomments/3110
Converting Unix Time given in string to QDateTime

The code snippet shows how we can convert a unix time value stored in
a QString to a corresponding value in QDateTime.

Headers required

    #include <QDateTime>

Source

Postconditions

The unix time value got successfully converted to: 07 Dec 2011 16:30:00

Vineet.jain - Compatibility

Though I have tested it using Nokia Qt SDK 1.1.3 and on the N9/N950
devices, I am pretty sure that this code snippet is compatible with
previous Qt SDK versions and platforms as well.

Thanks, Vineet

vineet.jain 11:55, 15 December 2011 (EET)

Hamishwillee - Nice

Hi Vineet. Thanks for the clarification. I've added "|platform=Qt, All
versions" to capture the compatibility with Symbian and MeeGo. Two
notes for your future articles:

Regards, Hamish

hamishwillee 02:03, 16 December 2011 (EET)
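A minimal sketch of the kind of snippet the Source section describes,
consistent with the stated header and postcondition (the variable
names and the sample value are assumptions, not the original code):

    #include <QDateTime>
    #include <QString>
    #include <QDebug>

    int main()
    {
        // Unix time (seconds since the epoch) received as a string;
        // 1323275400 corresponds to 07 Dec 2011 16:30:00 UTC.
        QString unixTimeStr = "1323275400";

        bool ok = false;
        uint seconds = unixTimeStr.toUInt(&ok);
        if (!ok)
            return 1;

        // fromTime_t interprets the value as seconds since the epoch
        // and returns the corresponding QDateTime in local time.
        QDateTime dateTime = QDateTime::fromTime_t(seconds);
        qDebug() << dateTime.toString("dd MMM yyyy hh:mm:ss");
        return 0;
    }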
http://developer.nokia.com/community/wiki/Converting_Unix_Time_given_in_string_to_QDateTime
Motivation

The UK shipping forecast is prepared by the UK Met Office 4 times a
day and published on the radio, the Met Office web site and the BBC
web site. However, it is not available in a computer readable form.
Tim Duckett recently blogged about creating a Twitter stream. He uses
Ruby to parse the text forecast. The textual form of the forecast is
included on both the Met Office and BBC sites. However, as Tim points
out, the format is designed for speech, compresses similar areas to
reduce the time slot, and is hard to parse. The approach taken here is
to scrape a JavaScript file containing the raw area forecast data.

Implementation

Dependencies

eXist-db Modules

The following scripts use these eXist modules:

- request - to get HTTP request parameters
- httpclient - to GET and POST
- scheduler - to schedule scraping tasks
- dateTime - to format dateTimes
- util - base64 conversions
- xmldb - for database access

Other

- UK Met Office web site

Met Office page

The Met Office page shows an area-by-area forecast, but this part of
the page is generated by JavaScript from data in a generated
JavaScript file. In this file, the data is assigned to multiple
arrays. A typical section looks like:

    // Bailey
    gale_in_force[28] = "0";
    gale[28] = "0";
    galeIssueTime[28] = "";
    shipIssueTime[28] = "1725 Sun 06 Jul";
    wind[28] = "Northeast 5 to 7.";
    weather[28] = "Showers.";
    visibility[28] = "Moderate or good.";
    seastate[28] = "Moderate or rough.";
    area[28] = "Bailey";
    area_presentation[28] = "Bailey";
    key[28] = "Bailey";
    // Faeroes
    ...

Area Forecast

JavaScript conversion

This function fetches the current JavaScript data using the eXist
httpclient module, converts the base64 data to a string, picks out the
required area data and parses the code to generate an XML structure
using the JavaScript array names.

    declare namespace httpclient = "";

    declare function met:get-forecast($area as xs:string) as element(forecast)?
    {
    let $jsuri := ""
    (: fetch the javascript source and locate the text of the body of the response :)
    let $base64 := httpclient:get(xs:anyURI($jsuri),true(),())/httpclient:body/text()
    (: this is base64 encoded, so decode it back to text :)
    let $js := util:binary-to-string($base64)
    (: isolate the section for the required area, prefixed with a comment :)
    let $areajs := normalize-space(substring-before(
                       substring-after($js,concat("// ",$area)),"//"))
    return
        if ($areajs = "")  (: area not found :)
        then ()
        else
        (: build an XML element containing elements for each of the data
           items, using the array names as the element names :)
        <forecast>
          {
          for $d in tokenize($areajs,";")[position() < last()]
              (: JavaScript statements terminated by ";" - ignore the last, empty one :)
          let $ds := tokenize(normalize-space($d)," *= *")
              (: separate the LHS and RHS of the assignment statement :)
          return
              element {replace(substring-before($ds[1],"["),"_","")}
                  (: element name is the array name, converted to a legal name :)
                  {replace($ds[2],'"','')}
                  (: element text is the RHS minus quotes :)
          }
        </forecast>
    };

For example, the output for one selected area is:

    <forecast>
      <galeinforce>0</galeinforce>
      <gale>0</gale>
      <galeIssueTime/>
      <shipIssueTime>0505 Mon 07 Jul</shipIssueTime>
      <wind>Northwest backing west 5 to 7.</wind>
      <weather>Squally showers.</weather>
      <visibility>Moderate or good.</visibility>
      <seastate>Moderate or rough.</seastate>
      <area>Fastnet</area>
      <areapresentation>Fastnet</areapresentation>
      <key>Fastnet</key>
    </forecast>

Format the forecast as text

The forecast data needs to be formatted into a string:

    declare function met:forecast-as-text($forecast as element(forecast)) as xs:string {
        concat(
            $forecast/weather,
            " Wind ", $forecast/wind,
            " Visibility ", $forecast/visibility,
            " Sea ", $forecast/seastate
        )
    };

Area Forecast

Finally these functions can be used in a script which accepts a
shipping area name and returns an XML message:

    import module namespace met = "" at "met.xqm";

    let $area := request:get-parameter("area",())  (: the requested shipping area :)
    let $forecast := met:get-forecast($area)
    return
        <message>
           {met:forecast-as-text($forecast)}
        </message>

Message abbreviation

To create a message suitable for texting (160 characters) or tweeting
(140 character limit), the message can be compressed by abbreviating
common words.

Abbreviation dictionary

A dictionary of words and abbreviations is created and stored locally.
The dictionary has been developed using some of the abbreviations in
Tim Duckett's Ruby implementation.

    <dictionary>
      <entry full="west" abbrev="W"/>
      <entry full="westerly" abbrev="Wly"/>
      ..
      <entry full="variable" abbrev="vbl"/>
      <entry full="visibility" abbrev="viz"/>
      <entry full="occasionally" abbrev="occ"/>
      <entry full="showers" abbrev="shwrs"/>
    </dictionary>

Abbreviation function

The abbreviation function breaks the text into words, replaces words
with abbreviations and builds the text up again:

    declare function met:abbreviate($forecast as xs:string) as xs:string {
        string-join(
            (: lowercase the string, append a space (to ensure a final . is matched) and tokenise :)
            for $word in tokenize(concat(lower-case($forecast)," "),"\.? +")
            return
                (: if there is an entry for the word, use its abbreviation,
                   otherwise use the unabbreviated word :)
                ( /dictionary/entry[@full=$word]/@abbrev, $word )[1]
            , " ")  (: join the words back up with a space separator :)
    };

Abbreviated Message

    import module namespace met = "" at "met.xqm";

    let $area := request:get-parameter("area",())  (: as in the previous script :)
    let $forecast := met:get-forecast($area)
    return
        <message>
           {met:abbreviate(met:forecast-as-text($forecast))}
        </message>
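As a worked example of met:abbreviate (the input here is chosen for
illustration and uses only the dictionary entries shown above):

    met:abbreviate("Showers. Visibility moderate or good.")
    (: returns "shwrs viz moderate or good" - "showers" and
       "visibility" are replaced from the dictionary; words without an
       entry pass through unchanged, and the periods are consumed by
       the tokenizer pattern :)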
All Areas forecast

This function is an extension of the area forecast. The parse uses the
comment separator to break up the script, and ignores the first and
last sections and the area name in the comment:

    declare function met:get-forecast() as element(forecast)* {
        let $jsuri := ""
        let $base64 := httpclient:get(xs:anyURI($jsuri),true(),())/httpclient:body/text()
        let $js := util:binary-to-string($base64)
        for $js in tokenize($js,"// ")[position() > 1][position() < last()]
        let $areajs := concat("gale",substring-after($js,"gale"))
        return
            <forecast>
              {
              for $d in tokenize($areajs,";")[position() < last()]
              let $ds := tokenize(normalize-space($d)," *= *")
              return
                  element {replace(substring-before($ds[1],"["),"_","")}
                      {replace($ds[2],'"','')}
              }
            </forecast>
    };

XML version of forecast

This script returns the full shipping forecast in XML:

    import module namespace met = "" at "met.xqm";

    <ShippingForecast>
      {met:get-forecast()}
    </ShippingForecast>

RSS version of forecast

XSLT would be suitable for transforming this XML to RSS format ...

SMS service

One possible use of this data would be to provide an SMS on-request
service, taking an area name and returning the abbreviated forecast.
The complete set of forecasts is created, and the one for the area
supplied as the message is selected and returned as an abbreviated
message.

    import module namespace met = "" at "met.xqm";

    let $area := lower-case(request:get-parameter("text",()))
    let $forecast := met:get-forecast()[lower-case(area) = $area]
    return
        if (exists($forecast))
        then concat("Reply: ", met:abbreviate(met:forecast-as-text($forecast)))
        else concat("Reply: Area ",$area," not recognised")

The calling protocol is determined here by the SMS service installed
at UWE and described here.

Caching

Fetching the JavaScript on demand is neither efficient nor acceptable
net behaviour, and since the forecast times are known, it is
preferable to fetch the data on a schedule, convert it to the XML form
and save it in the eXist database, then use the cached XML for later
requests.

Store XML forecast

    import module namespace met = "" at "met.xqm";
    ...
           {$forecast}
        </ShippingForecast> )
    return
        <result>
          Shipping forecast for {string($forecastDateTime)} stored in {$store}
        </result>
    else ()

The timestamp used on the source data is converted to an xs:dateTime
for ease of later processing.

    declare function met:timestamp-to-xs-date($dt as xs:string) as xs:dateTime {
        (: convert timestamps in the form 0505 Tue 08 Jul to xs:dateTime :)
        let $year := year-from-date(current-date())
            (: assume the current year since none is provided :)
        let $dtp := tokenize($dt," ")
        let $mon := index-of(("Jan","Feb","Mar","Apr","May","Jun",
                              "Jul","Aug","Sep","Oct","Nov","Dec"),$dtp[4])
        let $monno := if ($mon < 10) then concat("0",$mon) else $mon
        return
            xs:dateTime(concat($year,"-",$monno,"-",$dtp[3],"T",
                               substring($dtp[1],1,2),":",substring($dtp[1],3,4),":00"))
    };

Reducing the forecast data

The raw data contains redundant elements (several versions of the area
name) and elements which are normally empty (all gale-related elements
when there is no gale warning) but lacks a case-normalised area name
as a key. The following function performs this restructuring:

    declare function met:reduce($forecast as element(forecast)) as element(forecast) {
        <forecast>
          { attribute area {lower-case($forecast/area)} }
          { $forecast/*
              [not(name(.) = ("shipIssueTime","area","key"))]
              [ if (../galeinforce = "0")
                then not(name(.) = ("galeinforce","gale","galeIssueTime"))
                else true() ]
          }
        </forecast>
    };

There would be a case to make for using XSLT for this transformation.
The caching script applies this transformation to the forecast before
saving.

SMS via cache

The revised SMS script can now access the cache. First, a function to
get the stored forecast:

    declare function met:get-stored-forecast($area as xs:string) as element(forecast) {
        doc("/db/Wiki/Met/Forecast/shippingForecast.xml")
            /ShippingForecast/forecast[@area = $area]
    };

    import module namespace met = "" at "met.xqm";

    let $area := lower-case(normalize-space(request:get-parameter("text",())))
    let $forecast := met:get-stored-forecast($area)
    return
        if (exists($forecast))
        then concat("Reply: ",
                    datetime:format-dateTime($forecast/../@at,"HH:mm")," ",
                    met:abbreviate(met:forecast-as-text($forecast)))
        else concat("Reply: Area ",$area," not recognised")

In this script, the selected forecast for the input area extracted by
the met function call is a reference to the database element, not a
copy. Thus it is still possible to navigate back to the parent element
containing the timestamp. The eXist datetime functions are wrappers
for the Java class java.text.SimpleDateFormat, which defines the date
formatting syntax.

Job scheduling

eXist includes a scheduler module which is a wrapper for the Quartz
scheduler. Jobs can only be created by a DBA user. For example, to set
a job to fetch the shipping forecast on the hour:

    let $login := xmldb:login("/db", "admin", "admin password")
    let $job := scheduler:schedule-xquery-cron-job("/db/Wiki/Met/getandsave.xq",
                                                   "0 0 * * * ?")
    return $job

where "0 0 * * * ?" means run at 0 seconds, 0 minutes past every hour
of every day of every month, ignoring the day of the week.

To check on the set of scheduled jobs, including system schedule jobs:

    let $login := xmldb:login("/db", "admin", "admin password")
    return scheduler:get-scheduled-jobs()

It would be better to schedule jobs on the basis of the update
schedule for the forecast. These times are 0015, 0505, 1130 and 1725.
These times cannot be fitted into a single cron pattern, so multiple
jobs are required. Because jobs are identified by their path, the same
URL cannot be used for all instances, so a dummy parameter is added.

Discussion: The times are one minute later than the published times.
This may not be enough slack to account for discrepancies in timing on
both sides. Clearly a push from the UK Met Office would be better than
the pull scraping. The scheduler clock runs in local time (BST), as
are the publication times.

    let $login := xmldb:login("/db", "admin", "admin password")
    let $job1 := scheduler:schedule-xquery-cron-job("/db/Wiki/Met/getandsave.xq?t=1", "0 16 0 * * ?")
    let $job2 := scheduler:schedule-xquery-cron-job("/db/Wiki/Met/getandsave.xq?t=2", "0 6 5 * * ?")
    let $job3 := scheduler:schedule-xquery-cron-job("/db/Wiki/Met/getandsave.xq?t=3", "0 31 11 * * ?")
    let $job4 := scheduler:schedule-xquery-cron-job("/db/Wiki/Met/getandsave.xq?t=4", "0 26 17 * * ?")
    return ($job1, $job2, $job3, $job4)

Forecast as kml

Sea area coordinates

The UK Met Office provides a clickable map of forecasts, but a KML map
would be nice. The coordinates of the sea areas can be captured and
manually converted to XML.

    <?xml version="1.0" encoding="UTF-8"?>
    <boundaries>
      <boundary area="viking">
        <point latitude="61" longitude="0"/>
        <point latitude="61" longitude="4"/>
        <point latitude="58.5" longitude="4"/>
        <point latitude="58.5" longitude="0"/>
      </boundary>
      ...

The boundary for an area is accessed by two functions. In this idiom
one function hides the document location and returns the root of the
document.
Subsequent functions use this base function to get the document and
then apply further predicates to filter as required.

    declare function met:area-boundaries() as element(boundaries) {
        doc("/db/Wiki/Met/shippingareas.xml")/boundaries
    };

    declare function met:area-boundary($area as xs:string) as element(boundary) {
        met:area-boundaries()/boundary[@area=$area]
    };

The centre of an area can be roughly computed by averaging the
latitudes and longitudes:

    declare function met:area-centre($boundary as element(boundary)) as element(point) {
        <point latitude="{round(sum($boundary/point/@latitude)
                                div count($boundary/point) * 100) div 100}"
               longitude="{round(sum($boundary/point/@longitude)
                                 div count($boundary/point) * 100) div 100}"/>
    };

kml Placemark

We can generate a kml Placemark from a forecast:

    declare function met:forecast-to-kml($forecast as element(forecast)) as element(Placemark) {
        let $area := $forecast/@area
        let $boundary := met:area-boundary($area)
        let $centre := met:area-centre($boundary)
        return
            <Placemark>
              <name>{string($forecast/areapresentation)}</name>
              <description>
                {met:forecast-as-text($forecast)}
              </description>
              <Point>
                <coordinates>
                  {string-join(($centre/@longitude,$centre/@latitude),",")}
                </coordinates>
              </Point>
            </Placemark>
    };

kml sea area

Since we have the area coordinates, we can also generate the
boundaries as a line in kml:

    declare function met:sea-area-to-kml($area as xs:string, $showname as xs:boolean)
            as element(Placemark) {
        let $boundary := met:area-boundary($area)
        return
            <Placemark>
              {if ($showname) then <name>{$area}</name> else ()}
              <LineString>
                <coordinates>
                  {string-join(
                      for $point in $boundary/point
                      return string-join(($point/@longitude,$point/@latitude,"0"),",")
                    , " ")}
                </coordinates>
              </LineString>
            </Placemark>
    };

Generate the kml file

    import module namespace met = "" at "met.xqm";

    (: set the media type for a kml file :)
    declare option exist:serialize
        "method=xml indent=yes media-type=application/vnd.google-earth.kml+xml";

    (: set the file name and extension when saved to allow GoogleEarth to be invoked :)
    let $dummy := response:set-header('Content-Disposition','inline;filename=shipping.kml;')

    (: get the latest forecast :)
    let $shippingForecast := met:get-stored-forecast()
    return
        <kml>
          <Folder>
            <name>{datetime:format-dateTime($shippingForecast/@at,"EEEE HH:mm")}
                  UK Met Office Shipping forecast</name>
            {for $forecast in $shippingForecast/forecast
             return (met:forecast-to-kml($forecast),
                     met:sea-area-to-kml($forecast/@area,false())
                    )
            }
          </Folder>
        </kml>

Push messages

An alternative use of this data is to provide a channel to push the
forecasts through as soon as they are received. The channel could be
an SMS alert to subscribers or a dedicated Twitter stream which users
could follow.

Subscription SMS

This service should allow a user to request an alert for a specific
area or areas. The application requires:

- a data structure to record subscribers and their areas
- a web service to register a user, their mobile phone number and
  initial area [to do]
- an SMS service to change the required area and turn messaging on or
  off
- a scheduled task to push the SMS messages when the new forecast has
  been obtained

Document Structure

    <subscriptions>
      <subscription>
        <username>Fred Bloggs</username>
        <password>hafjahfjafa</password>
        <mobilenumber>447777777</mobilenumber>
        <area>lundy</area>
        <status>off</status>
      </subscription>
      ...
    </subscriptions>

XML Schema

(to be completed)

Access control

Access to this document needs to be controlled.
The first level of access control is to place the file in a collection
which is not accessible via the web. In the UWE server, the root (via
mod-rewrite) is the collection /db/Wiki, so resources in this
directory and subdirectories are accessible, subject to the access
settings on the file, but files in parent or sibling directories are
not. So this document is stored in the directory /db/Wiki2. The URL of
this file, relative to the external root, is ... but access fails.

The second level of control is to set the owner and permissions on the
file. This is needed because a user on a client behind the firewall,
using the internal server address, will gain access to this file. By
default, world permissions are set to read and update. Removing this
access requires the script to log in to read as group or owner.
Ownership and permissions can be set either via the web client or by
functions in the eXist xmldb module.

SMS push

This function takes a subscription, formulates a text message and
calls a general sms:send function to send it. This interfaces with our
SMS service provider.

    declare function met:push-sms($subscription as element(subscription)) as element(result) {
        let $area := $subscription/area
        let $forecast := met:get-stored-forecast($area)
        let $time := datetime:format-dateTime($forecast/../@at,"EE HH:mm")
        let $text := encode-for-uri(concat($area, " ", $time, " ",
                         met:abbreviate(met:forecast-as-text($forecast))))
        let $number := $subscription/mobilenumber
        let $sent := sms:send($number,$text)
        return
            <result number="{$number}" area="{$area}" sent="{$sent}"/>
    };

SMS push subscriptions

First we need to get the active subscriptions. The functions follow
the same idiom used for boundaries:

    declare function met:subscriptions() {
        doc("/db/Wiki2/shippingsubscriptions.xml")/subscriptions
    };

    declare function met:active-subscriptions() as element(subscription)* {
        met:subscriptions()/subscription[status="on"]
    };

and then iterate through the active subscriptions and report the
result:

    declare function met:push-subscriptions() as element(results) {
        <results>
          {
          let $dummy := xmldb:login("/db","webuser","password")
          for $subscription in met:active-subscriptions()
          return met:push-sms($subscription)
          }
        </results>
    };

This script iterates through the subscriptions currently active and
calls the push-SMS function for each one.

    import module namespace met = "" at "met.xqm";

    met:push-subscriptions()

This task could be scheduled to run after the caching task has run, or
the caching script could be modified to invoke the subscription task
when it has completed. However, eXist also supports triggers, so the
task could also be triggered by the database event raised when the
forecast file store has been completed.

Subscription editing by SMS

A message format is required to edit the status of the subscription
and to change the subscription area:

    metsub [ on | off | <area> ]

If the area is changed, the status is set to on. The area is validated
against a list of area codes.
These are extracted from the boundary data:

    declare function met:area-names() as xs:string* {
        met:area-boundaries()/boundary/string(@area)
    };

    import module namespace met = "" at "met.xqm";

    let $login := xmldb:login("/db","user","password")
    let $text := normalize-space(request:get-parameter("text",()))
    let $number := request:get-parameter("from",())
    let $subscription := met:get-subscription($number)
    return
        if (exists($subscription))
        then
            let $update :=
                if ($text = "on")
                then update replace $subscription/status with <status>on</status>
                else if ($text = "off")
                then update replace $subscription/status with <status>off</status>
                else if (lower-case($text) = met:area-names())
                then (update replace $subscription/area with <area>{$text}</area>,
                      update replace $subscription/status with <status>on</status>
                     )
                else ()
            return
                let $subscription := met:get-subscription($number)
                    (: get the subscription post-update :)
                return concat("Reply: forecast is ",$subscription/status,
                              " for area ",$subscription/area)
        else ()

Twitter

Twitter has a simple REST API to update the status. We can use this to
tweet the forecasts to a Twitter account. Twitter uses Basic Access
Authentication, and a suitable XQuery function to send a message for a
username/password, using the eXist httpclient module, is:

    declare function met:send-tweet($username as xs:string, $password as xs:string,
                                    $tweet as xs:string) as xs:boolean {
        let $uri := xs:anyURI("")
        let $content := concat("status=", encode-for-uri($tweet))
        let $headers :=
            <headers>
              <header name="Authorization"
                      value="Basic {util:string-to-binary(concat($username,':',$password))}"/>
              <header name="Content-Type"
                      value="application/x-www-form-urlencoded"/>
            </headers>
        let $response := httpclient:post($uri, $content, false(), $headers)
        return $response/@statusCode = '200'
    };

A script is needed to access the stored forecast and tweet the
forecast for an area. Different Twitter accounts could be set up for
each shipping area. The script will need to be scheduled to run after
the full forecast has been acquired. In this example, the forecast for
a given area is tweeted to a hard-coded twitterer:

    import module namespace met = "" at "met.xqm";

    declare variable $username := "kitwallace";
    declare variable $password := "mypassword";
    declare variable $area := request:get-parameter("area","lundy");

    let $forecast := met:get-stored-forecast($area)
    let $time := datetime:format-dateTime($forecast/../@at,"HH:mm")
    let $message := concat($area," at ",$time,":",
                           met:abbreviate(met:forecast-as-text($forecast)))
    return
        <result>{met:send-tweet($username,$password,$message)}</result>

To do

Creating and editing subscriptions

This task is ideal for XForms.

Triggers

Use a trigger to push the SMS messages when the update has been done.
http://en.m.wikibooks.org/wiki/XQuery/UK_shipping_forecast
Lars Marius Garshol wrote:
>
> * L.

:)

Per my other email, SMIL is a protocol defined to use XML namespaces.
CWI has been working on applications, for a long while now, that use
SMIL (note that Jack works at CWI, and I'd guess *on* that project).
WebDAV is no small potatoes either :-)

-g

--
Greg Stein,
https://mail.python.org/pipermail/xml-sig/1998-November/000480.html
#include <Xm/XmIm.h>

XIM XmImGetXIM(
        Widget widget);

XmImGetXIM retrieves the XIM data structure representing the input
method that the input manager has opened for the specified widget. If
an input method has not been opened by a previous call to
XmImRegister, the first time this routine is called it opens an input
method using the XmNinputMethod resource for the VendorShell. If the
XmNinputMethod is NULL, an input method is opened using the current
locale. If it cannot open an input method, the function returns NULL.

Returns the input method for the current locale associated with the
specified widget's input manager; otherwise, returns NULL. The
application is responsible for freeing the returned XIM by calling
XmImCloseXIM.

XmImCloseXIM(3), XmImGetXIM(3), XmImMbLookupString(3), and
XmImRegister(3).
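A minimal usage sketch (assuming a realized Motif widget named widget;
error handling beyond the NULL check is elided):

    #include <Xm/XmIm.h>

    void use_input_method(Widget widget)
    {
        /* Retrieve (or open) the input method for this widget's
           input manager. */
        XIM xim = XmImGetXIM(widget);
        if (xim != NULL) {
            /* ... use the XIM ... */

            /* The application must free the XIM when finished. */
            XmImCloseXIM(widget);
        }
    }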
http://www.makelinux.net/man/3/X/XmImGetXIM
ZF-2544: Add message type as argument to FlashMessenger.

Description

Currently you can't specify what type of message you're sending with
the FlashMessenger in Zend_Controller. It's just a message. I would
like to be able to differentiate the messages I'm sending via
FlashMessenger (e.g. error, warning, information, tip, etc.) so users
can see more clearly what's going on. Therefore I propose to enhance
the addMessage method of the FlashMessenger with an additional
argument that allows setting the type of message.

Change:

    addMessage (string $message, string $namespace)

To:

    addMessage (string $message, string $message_type, string $namespace)

Posted by Matthew Weier O'Phinney (matthew) on 2008-02-14T11:04:34.000+0000

This issue duplicates ZF-1705

Posted by Wil Sinclair (wil) on 2008-03-25T20:41:12.000+0000

Please categorize/fix as needed.

Posted by Matthew Weier O'Phinney (matthew) on 2008-04-22T13:39:51.000+0000

Assigning to Ralph to evaluate and schedule.

Posted by Sean P. O. MacCath-Moran (emanaton) on 2009-04-25T20:24:13.000+0000

Greetings All,

FYI, I've created a PriorityMessenger - please check it out and tell
me what you think?

Regards,
Sean P. O. MacCath-Moran

Posted by Marc Hodgins (mjh_ca) on 2010-11-26T22:39:33.000+0000

Closing as duplicate of ZF-1705
http://framework.zend.com/issues/browse/ZF-2544?focusedCommentId=43350&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
Plugin::Installer

Call the plugin's compiler, install it via qualify_to_ref, then
dispatch it using goto.

    package Myplugin;

    use base qw( Plugin::Installer Plugin::Language::Foobar );

    ...

    my $plugin = Myplugin->construct;

    # frobnicate is passed first to Plugin::Installer
    # via AUTOLOAD, then to P::L::Foobar's compile
    # method. if what comes back from the compiler is
    # a referent it is installed in the P::L::F namespace
    # and if it is a code referent it is dispatched.

    $plugin->frobnicate;

The goal of this module is to provide a simple, flexible interface for
developing plugin languages. Any language that can store its object
data as a hash and implement a "compile" method that takes the method
name as an argument can use this class.

The Plugin framework gives runtime compile, install, and dispatch of
user-defined code. The code doesn't have to be Perl, just something
that the object handling it can compile. The installer is
language-agnostic: in fact it has no idea what the object does with
the name passed to its compiler. All it does is (by default) install a
returned reference and dispatch coderefs.

This is intended as a convenience class that standardizes the top half
of any plugin language. Note that any referent returned by the
compiler is installed. Handing back a hashref can deposit a hash into
the caller's namespace. This allows plugins to handle command line
switches (via GetoptFoo and a hashref) or manipulate queues (by
handing back an [updated] arrayref).

By default coderefs are dispatched via goto, which allows the obvious
use of compiling the plugin to an anonymous sub for later use. This
makes the plugins something of a trampoline object, with the exception
that the "trampolines" are the class' methods rather than the object
itself.

AUTOLOAD extracts the package and method name from a call, dispatches
$package->compile( $name ), and handles the result. Results can be
installed (if they are referents of any type) and dispatched (if they
are coderefs). The point of this is that the plugin language is free
to compile the plugin source to whatever suits it best;
Plugin::Installer will install the result. In most cases the result
will be a coderef, which will be installed as $AUTOLOAD, which allows
plugins to resolve themselves from source to method at runtime.

DESTROY is a stub; it saves passing back through the AUTOLOAD
unnecessarily. Plugin classes that need housekeeping should implement
a DESTROY of their own.

During compilation, Plugin::Installer::AUTOLOAD places an
{install_meta} entry into the object. This is done via local hash
value, and will not be visible to the caller after the autoloader has
processed the call. This defines switches used for post-compile
handling:

    my $default_meta =
    {
        install     => 1,
        dispatch    => 1,
        storemeta   => 0,
        alt_package => '',
    };

install: Does a referent returned from $obj->compile get installed
into the namespace or simply dispatched? This is used to avoid
installing plugins whose contents will be re-defined during execution
and called multiple times.

dispatch: Is a code referent dispatched (whether or not it is
installed into a package)? Some methods may be easier to pre-install
but not dispatch immediately (e.g. if they involve expensive startup
but have per-execution side effects). Setting this to false will skip
dispatch of coderefs even if they are installed.

alt_package: Package to install the referent into (the default if
false is the object's package). This is the namespace passed with the
method name to 'qualify_to_ref'.
This can be used by the compiler to install data or coderefs into a
caller's namespace (e.g. via caller(2)). If this is used with
storemeta then ALL of the methods for the plugin class will be
installed into the alternate package space unless they set their own
alt_package when called.

storemeta: Store the current metadata as the default for this class?
The metadata is stored by class name, allowing an initial "startup"
call (say in the constructor or import) to configure appropriate
defaults for the entire class.

Note that if install is true for a coderef then none of these matter
much after the first call, since the installed method will bypass the
AUTOLOAD. Corollary: if a "setup" method is used to set metadata
values then it probably should not be installed, so that it can fondle
the class' metadata and modify it if necessary on later calls. This
also means that plugin languages should implement some sort of
instructions to modify the metadata.

SEE ALSO

Example plugin class with simple, working compiler.

Little language for bulk data filtering, including pre- and
post-processing DBI calls; uses Plugin::Install to handle method
installation.

Installing symbols without resorting to no strict 'refs'.

Extracting the basetype of a blessed referent.

Trampoline object: construction and initialization are put off until a
method is called on the compiled object.

AUTHORS

Steven Lembark <lembark@wrkhors.com>
Florian Mayr <florian.mayr@gmail.com>

COPYRIGHT

Copyright (C) 2005 by the authors; this code can be reused and
re-released under the same terms as Perl-5.8.0 or any later version of
Perl.
http://search.cpan.org/~lembark/Plugin-Installer-0.04/lib/Plugin/Installer.pm
CC-MAIN-2014-15
en
refinedweb
I'm trying to read the "from" and "subject" fields from the messages in a POP3 mailbox (I don't care about the content of the messages), and have cobbled together the following code from online examples: use Net::POP3; # $MAILSERVER,$MAILUSER,$MAILPASS defined here! # Constructors $pop = Net::POP3->new($MAILSERVER); if ($pop->login($MAILUSER, $MAILPASS) > 0) { my $msgnums = $pop->list; # hashref of msgnum => size foreach my $msgnum (keys %$msgnums) { my $head = $pop->top($msgnum,0); my ($subject, $from) = analyze_header($head); print "From: $from ; Subject: $subject \n"; } } $pop->quit; print "done.\n"; sub analyze_header { my $header_array_ref = shift; my $header = join "", @$header_array_ref; my ($subject) = $header =~ /Subject: (.*)/m; my ($from ) = $header =~ /From: (.*)/m; return ($subject, $from); } It works, insofar as it successfully logs in and reads the (two identical) messages, but this is the output I get: starting... From: "=?utf-8?B?Q2hyaXMgSHVudA==?=" <chris@example.com> ; Subject: =?utf-8?B?VGVzdA==?= From: "=?utf-8?B?Q2hyaXMgSHVudA==?=" <chris@example.com> ; Subject: =?utf-8?B?VGVzdA==?= done. The contents of the header fields appear (I assume) to be encoded somehow into UTF-8, but how do I decode it into something I can make sense of? I assume there must be some standard method of doing this, but none of the documentation I could find gives me any help.
http://www.perlmonks.org/?node_id=1004630
CC-MAIN-2014-15
en
refinedweb
This document describes OAuth 2.0, when to use it, how to acquire client IDs, and how to use it with the Google APIs Client Library for Python. Contents - OAuth 2.0 explained - Acquiring client IDs and secrets - The oauth2client library - Flows - Credentials - Storage - Command-line tools OAuth 2.0 explained OAuth 2.0 is the authorization protocol used by Google APIs. It is summarized on the Authentication page of this library's documentation, and there are other good references as well. The protocol is solving a complex problem, so it can be difficult to understand. The following presentation explains the important concepts of the protocol, and introduces you to how the library is used at each step. Acquiring client IDs and secrets You can get client IDs and secrets on the API Access pane of the Google APIs Console. There are different types of client IDs, so be sure to get the correct type for your application: - Web application client IDs - Installed application client IDs - Service Account client IDs Warning: Keep your client secret private. If someone obtains your client secret, they could use it to consume your quota, incur charges against your Google APIs Console project, and request access to user data. The oauth2client library The oauth2client library is included with the Google APIs Client Library for Python. It handles all steps of the OAuth 2.0 protocol required for making API calls. It is available as a separate download if you only need an OAuth 2.0 library. The sections below describe important modules, classes, and functions of this library. Flows The purpose of a Flow class is to acquire credentials that authorize your application access to user data. In order for a user to grant access, OAuth 2.0 steps require your application to potentially redirect their browser multiple times. A Flow object has functions that help your application take these steps and acquire credentials. Flow objects are only temporary and can be discarded once they have produced credentials, but they can also be pickled and stored. This section describes the various methods to create and use Flow objects. Note: See the Using Google App Engine and Using Django pages for platform-specific Flows. flow_from_clientsecrets() The oauth2client.client.flow_from_clientsecrets() method creates a Flow object from a client_secrets.json file. This JSON formatted file stores your client ID, client secret, and other OAuth 2.0 parameters. The following shows how you can use flow_from_clientsecrets() to create a Flow object: from oauth2client.client import flow_from_clientsecrets ... flow = flow_from_clientsecrets('path_to_directory/client_secrets.json', scope='', redirect_uri='') OAuth2WebServerFlow Despite its name, the oauth2client.client.OAuth2WebServerFlow class is used for both installed and web applications. It is created by passing the client ID, client secret, and scope to its constructor. You also provide the constructor with a redirect_uri parameter, which must be a URI handled by your application: from oauth2client.client import OAuth2WebServerFlow ... flow = OAuth2WebServerFlow(client_id='your_client_id', client_secret='your_client_secret', scope='', redirect_uri='') step1_get_authorize_url() The step1_get_authorize_url() function of the Flow class is used to generate the authorization server URI. Once you have the authorization server URI, redirect the user to it. The following is an example call to this function: auth_uri = flow.step1_get_authorize_url() # Redirect the user to auth_uri on your platform.
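On a Python web stack that redirect is typically a one-liner. Here is a minimal sketch using Flask; the /login route and the placeholder credential strings are our own assumptions, not part of the library:

from flask import Flask, redirect
from oauth2client.client import OAuth2WebServerFlow

app = Flask(__name__)

@app.route('/login')
def login():
    # Build the flow exactly as shown above; the placeholder values
    # would come from your API Access pane.
    flow = OAuth2WebServerFlow(client_id='your_client_id',
                               client_secret='your_client_secret',
                               scope='your_scope',
                               redirect_uri='https://your.app/oauth2callback')
    # Send the user's browser to the authorization server.
    return redirect(flow.step1_get_authorize_url())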
If the user has previously granted your application access, the authorization server immediately redirects again to redirect_uri. If the user has not yet granted access, the authorization server asks them to grant your application access. If they grant access, they get redirected to redirect_uri with a code query string parameter similar to the following: https://your_redirect_uri/?code=AUTH_CODE If they deny access, they get redirected to redirect_uri with an error query string parameter similar to the following: https://your_redirect_uri/?error=access_denied step2_exchange() The step2_exchange() function of the Flow class exchanges an authorization code for a Credentials object. Pass the code provided by the authorization server redirection to this function: credentials = flow.step2_exchange(code) Credentials A Credentials object holds refresh and access tokens that authorize access to a single user's data. These objects are applied to httplib2.Http objects to authorize access. They only need to be applied once and can be stored. This section describes the various methods to create and use Credentials objects. Note: See the Using Google App Engine and Using Django pages for platform-specific Credentials. OAuth2Credentials The oauth2client.client.OAuth2Credentials class holds OAuth 2.0 credentials that authorize access to a user's data. Normally, you do not create this object by calling its constructor. A Flow object can create one for you. SignedJwtAssertionCredentials The oauth2client.client.SignedJwtAssertionCredentials class is only used with OAuth 2.0 Service Accounts. No end-user is involved for these server-to-server API calls, so you can create this object directly without using a Flow object. AccessTokenCredentials The oauth2client.client.AccessTokenCredentials class is used when you have already obtained an access token by some other means. You can create this object directly without using a Flow object. authorize() Use the authorize() function of the Credentials class to apply necessary credential headers to all requests made by an httplib2.Http instance: import httplib2 ... http = httplib2.Http() http = credentials.authorize(http) Once an httplib2.Http object has been authorized, it is typically passed to the build function: from apiclient.discovery import build ... service = build('calendar', 'v3', http=http) Storage An oauth2client.client.Storage object stores and retrieves Credentials objects. This section describes the various methods to create and use Storage objects. Note: See the Using Google App Engine and Using Django pages for platform-specific Storage. file.Storage The oauth2client.file.Storage class stores and retrieves a single Credentials object. The class supports locking such that multiple processes and threads can operate on a single store. The following shows how to open a file, save Credentials to it, and retrieve those credentials: from oauth2client.file import Storage ... storage = Storage('a_credentials_file') storage.put(credentials) ... credentials = storage.get() multistore_file The oauth2client.multistore_file module allows multiple credentials to be stored. The credentials are keyed off of: - client ID - user agent - scope keyring_storage The oauth2client.keyring_storage module allows a single Credentials object to be stored in a password manager if one is available. The credentials are keyed off of: - Name of the client application - User name from oauth2client.keyring_storage import Storage ... storage = Storage('application name', 'user name') storage.put(credentials) ...
credentials = storage.get() Command-line tools The oauth2client.tools.run_flow() function can be used by command-line applications to acquire credentials. It takes a Flow argument and attempts to open an authorization server page in the user's default web browser. The server asks the user to grant your application access to the user's data. If the user grants access, the run_flow() function returns new credentials. The new credentials are also stored in the Storage argument, which updates the file associated with the Storage object. The oauth2client.tools.run_flow() function is controlled by command-line flags, and the Python standard library argparse module must be initialized at the start of your program. Argparse is included in Python 2.7+, and is available as a separate package for older versions. The following shows an example of how to use this function: import argparse from oauth2client import tools parser = argparse.ArgumentParser(parents=[tools.argparser]) flags = parser.parse_args() ... credentials = tools.run_flow(flow, storage, flags)
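Putting the pieces together, a complete command-line script built from the calls documented above might look like the following sketch. The file names and scope string are placeholders, and the credentials.invalid check is the usual oauth2client idiom for detecting missing or revoked credentials:

import argparse
import httplib2
from apiclient.discovery import build
from oauth2client import tools
from oauth2client.client import flow_from_clientsecrets
from oauth2client.file import Storage

# Parse the command-line flags that control run_flow().
parser = argparse.ArgumentParser(parents=[tools.argparser])
flags = parser.parse_args()

# Reuse stored credentials when possible; otherwise run the flow.
storage = Storage('a_credentials_file')
credentials = storage.get()
if credentials is None or credentials.invalid:
    flow = flow_from_clientsecrets('client_secrets.json', scope='your_scope')
    credentials = tools.run_flow(flow, storage, flags)

# Apply the credentials and build a service object.
http = credentials.authorize(httplib2.Http())
service = build('calendar', 'v3', http=http)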
https://developers.google.com/api-client-library/python/guide/aaa_oauth
CC-MAIN-2014-15
en
refinedweb
Description The fully qualified class name must be set correctly to support conversions like %C. In some cases, %C information is incorrect. I think this is related to AbstractLogger allowing getFQCN() to be overridden. Log methods handled by AbstractLogger should have AbstractLogger's FQCN, not that of the subclass. Example: slf4jLogger.info("slf4jLogger test %C."); // works; SLF4JLogger overrides this method slf4jLogger.info("slf4jLogger test %C.", t); // fails with '?'; handled by AbstractLogger category2.info("category test %C"); // fails with 'o.a.l.Category' Output: INFO Log4j2Testing [main] slf4jLogger test %C. INFO ? [main] slf4jLogger test %C. java.lang.Throwable at Log4j2Testing.main(Log4j2Testing.java:22) INFO o.a.l.Category [main] category test %C Activity Actually, in the original report I was using log4j12-api. It was a puzzle to determine why your test passed while mine failed. My code looked like: import org.apache.log4j.Category; private static final org.apache.logging.log4j.Logger log4j2Logger = LogManager.getLogger(Log4j2Testing.class.getName()); private static final Category category2 = Category.getInstance(Log4j2Testing.class); ... As it turns out, the order in which the loggers were created changed the result. See LOG4J2-51. I was able to get correct results by flipping the order of the getLogger/getInstance calls. In your patch for the SLF4J API, the problems with slf4jLogger are resolved since SLF4JLogger no longer extends AbstractLogger. But a problem remains with the getFQCN() logic. I have a patch to correct this. It builds on the patches in LOG4J2-51. Note: LOG4J2-51 temporarily causes your new test case to fail, but the forthcoming patch fixes the root cause. Patch applied. Please verify and close.
https://issues.apache.org/jira/browse/LOG4J2-50?focusedCommentId=13115259&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
CC-MAIN-2014-15
en
refinedweb
Disclaimer: at the moment of writing this article mkdev is not running containers in production. Images built below are only used for development, tests, and our CI system and are never run on production servers. Once mkdev decides to use containers in production, the contents and setup of our container images will change to be actually suitable for prod. Keep this in mind when reading this post. In the previous article we looked at all the reasons why you would want to taste a Dockerless life. We decided to try two new tools that will replace Docker: Buildah and Podman. In this article we will learn what Buildah is and how to use it to put your Ruby on Rails application into a container. What is a container image? Before we learn the tool, let's first learn what a container image is by reading the article A sysadmin's guide to containers. From there we learn that a container image is a TAR file of two things: - Container root filesystem. Simply put, it's a directory with all the regular directories you would expect to be inside the container, like /usr, /home etc. - JSON file, a config file that defines how to run this root filesystem -- which commands to execute, which environment variables to set and so on. The contents of a container image are defined in the OCI image spec, your go-to destination if you want to learn more about the structure of container images. It might sound crazy, but you don't have to use image-spec for container images, you can use it for other things too. What is Buildah? Buildah is a container image builder tool that produces OCI-compliant images. It is distributed as a single binary and is written in Go. Buildah is available as a package in most modern Linux distributions, just follow the official installation instructions. Buildah can only be used to manipulate images. Its job is to build container images and push them to registries. There is no daemon involved. Neither does Buildah require root privileges to build images. This makes Buildah especially handy as part of a CI/CD pipeline -- you can easily run Buildah inside a container without granting this container any root rights. To me personally, the whole Docker in Docker setup required on container-based CI systems (Gitlab CI with Docker executor, for example) just to be able to build a new container image felt like overkill. With Buildah there is no need for this, due to the narrow focus on things it needs to do well and things it should not do at all. One place where Buildah appears to be very useful is BuildConfigurations in OpenShift. Starting from OpenShift 4.0, BuildConfigs will rely on Buildah instead of Docker, thus removing the need to share any sockets or have privileged containers inside the OpenShift platform. Needless to say, it results in a more secure and cleaner way to build container images inside one of the most popular container platforms out there. So how do we build an image with Buildah? With Buildahfile Just kidding, there is actually no Buildahfile involved. Instead, Buildah can just read Dockerfiles, making the transition from Docker to Buildah as easy as it can get. At mkdev we use Mattermost at the core of our messaging platform. It is important that we are able to run Mattermost locally to be able to easily develop integrations between the primary web application and the messaging system.
Even though Mattermost already provides official Docker images, we had to build our own due to the way we prefer to configure it and also to make it easier to run ephemeral test instances of Mattermost. We also want to pre-install certain Mattermost plugins that our mentors rely on. So we took the official Dockerfile, modified it a bit and fed it to Buildah: FROM alpine:3.9 # Some ENV variables ENV PATH="/opt/mattermost/bin:${PATH}" ENV MM_VERSION=5.8.0 # Set defaults for the config ENV MMDBCON=localhost:5432 \ MMDBKEY=XXXXXXXXXXXX \ MMSMTPUSERNAME=postfix \ MMSMTPPASSWORD=secrets \ MMSMTPSALT=XXXXXXXXXXXX \ MMGITHUBSECRET=secret \ MMGITHUBHOOK=localhost # Build argument to set Mattermost edition ARG PUID=2000 ARG PGID=2000 # Install some needed packages RUN apk add --no-cache \ ca-certificates \ curl \ jq \ libc6-compat \ libffi-dev \ linux-headers \ mailcap \ netcat-openbsd \ xmlsec-dev \ && rm -rf /tmp/* ## Get Mattermost RUN mkdir -p /opt/mattermost/data /opt/mattermost/plugins /opt/mattermost/client/plugins \ && cd /opt \ && curl | tar -xvz \ && curl -L -o /tmp/github.tar.gz \ && cd /opt/mattermost/plugins \ && tar -xvf /tmp/github.tar.gz COPY files/entrypoint.sh / COPY files/mattermost.json /opt/mattermost/config/config.json RUN chmod +x /entrypoint.sh \ && addgroup -g ${PGID} mattermost \ && adduser -D -u ${PUID} -G mattermost -h /mattermost -D mattermost \ && chown -R mattermost:mattermost /opt/mattermost /opt/mattermost/plugins /opt/mattermost/client/plugins USER mattermost # Configure entrypoint and command ENTRYPOINT ["/entrypoint.sh"] WORKDIR /opt/mattermost CMD ["mattermost"] # Expose port 8000 of the container EXPOSE 8065 # Declare volumes for mount point directories VOLUME ["/opt/mattermost/data", "/opt/mattermost/logs", "/opt/mattermost/config", "/opt/mattermost/plugins", "/opt/mattermost/client/plugins"] If it looks to you just like any other regular Dockerfile then only because it is, in fact, just a regular Dockerfile. Let's run Buildah: buildah bud -t docker.io/mkdevme/mattermost:5.8.0 . The output that will follow is similar to what you see when you run docker build . command. The resulting image will be stored locally, you can see it when you run buildah images command. Nice little feature of Buildah is that your images are user-specific, meaning that only the user who built this image is able to see and use it. If you run buildah images as any other system user, you won't see anything. This is different from Docker, where docker images always list same set of images for all the users. Once you built the image, you can push it to the registry. Buildah supports multiple transports to push your image. Some transport examples are docker-daemon -- if you still have Docker running locally and you want this image to be seen by Docker, docker -- if you want to push the image to Docker API compatible remote registry. There are other transports that are not Docker-specific: oci, containers-storage, dir etc. Nothing stops you from using Buildah to push the image to Docker Hub, if that's your registry of choice. By using Buildah we are not thinking in terms of Docker Images. It's more like if we would have a Git repository that we could push to GitHub, GitLab or BitBucket. Same way we can push our Container Image to the registry of choice -- Docker Hub, Quay, AWS ECR and others. Inspecting the image One of the transports Buildah supports is dir. 
When you push your image to dir, which is just a directory on filesystem, Buildah will store there tarballs for the layers and configuration of your image and a JSON manifest file. This is only useful for debugging and perfect for seeing the internals of an image. Create some directory and run buildah push IMAGE dir:/$(pwd). I don't expect you to actually build a Mattermost image, just use any other image. If you don't have any and don't want to build any, then just buildah pull any image from Docker Hub. Once finished, you will see files with names like 96c6e3522e18ff696e9c40984a8467ee15c8cf80c2d32ffc184e79cdfd4070f6, which is actually a tarball. You can untar this file into a destination of your choice and see all the files inside this image layer. You will also see an image manifest.json file, in case of Mattermost it looks like this: { "schemaVersion": 2, "config": { "mediaType": "application/vnd.oci.image.config.v1+json", "digest": "sha256:57ea4e4c7399849779aa80c7f2dd3ce4693a139fff2bd3078f87116948d1991b", "size": 1262 }, "layers": [ { "mediaType": "application/vnd.oci.image.layer.v1.tar", "digest": "sha256:6bb94ea9af200b01ff2f9dc8ae76e36740961e9a65b6b23f7d918c21129b8775", "size": 2832039 }, { "mediaType": "application/vnd.oci.image.layer.v1.tar", "digest": "sha256:96c6e3522e18ff696e9c40984a8467ee15c8cf80c2d32ffc184e79cdfd4070f6", "size": 162162411 } ] } Image manifest is described by OCI spec. If you look closely at the example above, it defines two layers (vnd.oci.image.layer.v1.tar) and one config file (vnd.oci.image.config.v1+json). We can see that the config has a digest 57ea4e4c7399849779aa80c7f2dd3ce4693a139fff2bd3078f87116948d1991b. We have this file as well and though it looks just like layer files, it's actually a config file of the image. This might be a bit confusing, but keep in mind that this structure was created for other software to store and process, not for the human eye to read. If you need to quickly figure which file in the image stores the config, always look at the manifest.json first: { "created": "2019-05-12T16:13:28.951120907Z", "architecture": "amd64", "os": "linux", "config": { "User": "mattermost", "ExposedPorts": { "8065/tcp": {} }, "Env": [ "PATH=/opt/mattermost/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "MM_VERSION=5.8.0", "MMDBCON=localhost:5432", "MMDBKEY=XXXXXXXXXX", "MMSMTPUSERNAME=postfix", "MMSMTPPASSWORD=secrets", "MMSMTPSALT=XXXXXXXXXX", "MMGITHUBSECRET=secret", "MMGITHUBHOOK=localhost" ], "Entrypoint": [ "/entrypoint.sh" ], "Cmd": [ "mattermost" ], "Volumes": { "/opt/mattermost/client/plugins": {}, "/opt/mattermost/config": {}, "/opt/mattermost/data": {}, "/opt/mattermost/logs": {}, "/opt/mattermost/plugins": {} }, "WorkingDir": "/opt/mattermost" }, "rootfs": { "type": "layers", "diff_ids": [ "sha256:f1b5933fe4b5f49bbe8258745cf396afe07e625bdab3168e364daf7c956b6b81", "sha256:462e838baed1292fb825d078667b126433674cdc18c1ba9232e2fb8361fc8ac2" ] }, "history": [ { "created": "2019-05-11T00:07:03.358250803Z", "created_by": "/bin/sh -c #(nop) ADD file:a86aea1f3a7d68f6ae03397b99ea77f2e9ee901c5c59e59f76f93adbb4035913 in / " }, { "created": "2019-05-11T00:07:03.510395965Z", "created_by": "/bin/sh -c #(nop) CMD [\"/bin/sh\"]", "empty_layer": true }, { "created": "2019-05-12T16:13:28.951120907Z" } ] } So, just a bunch of tarballs and json files -- that's the whole container image! You say Dockerless but you still rely on Dockerfile! Creators of Buildah intentionally decided not to introduce new DSL for defining container images. 
Buildah gives you two ways to define an image: a Dockerfile or a sequence of buildah commands. We will learn the second way shortly, but I must warn you that I don't think Dockerfiles will disappear anytime soon. And there is probably nothing wrong with them, except the name itself. Imagine investing into going Dockerless only to find yourself still writing Dockerfiles! I wish they would be called Containerfiles or Imagefiles. That would be much less awkward for the community. But as of now, the convention is to name this file a Dockerfile and we simply have to deal with it. Building images with Buildah directly The second way to build an image with Buildah is by using buildah commands. The way Buildah builds images is by creating a new container from a base image and then running commands inside this container. After all commands have run, you can commit this container to become an image. Let's build an image this way and then discuss if and when this is better than writing a Dockerfile. We first need to start a new container from the existing image: buildah from centos:7 If the image doesn't exist yet, it will be pulled from the registry, just like when you use Docker. The buildah from command will return the name of the container that was started, normally it's "IMAGE_NAME-working-container", and in our case it's centos-working-container. We need to remember to use this name for all of the future commands. We can run commands inside this container with the buildah run command: buildah run centos-working-container -- yum install unzip -y And we can configure various OCI-compliant options for the future image with the buildah config command, for example an environment variable: buildah config -e ENVIRONMENT=test centos-working-container We can also mount the complete container filesystem inside of the build server and manipulate it directly from the host with the tools installed on the host. It is useful when we don't want to install certain tools inside the image just to do some build-time manipulations. Keep in mind that in this case you need to make sure all these tools are installed on the machine of anyone who wants to build your image (which then kind of ruins the portability of your build script). buildah mount centos-working-container In return Buildah will give you the location of a mounted filesystem, for example /home/fodoj/.local/share/containers/storage/overlay/DIGEST/merged. Just to test, we can then create a file there: touch /home/fodoj/.local/share/containers/storage/overlay/DIGEST/merged/home/hello-from-host Once we are happy with the image, we can commit it: buildah commit centos-working-container my-first-buildah-image And remove the working container: buildah rm centos-working-container Note that even though Buildah does run containers, it provides no way to do it in a way that would be useful for anything but building images. Buildah is not a replacement for a container engine, it only gives you some primitives to debug the process of building an image! Images built by Buildah are visible to Podman, which will be the topic of the next article. For now, if you want to verify that the file hello-from-host really exists, run this: image=$(buildah from my-first-buildah-image) ls $(buildah mount $image)/home $> hello-from-host This will create another working container. Mount it and show the contents of the /home directory. The way we did it is actually the way to go if you want to build images with Buildah and without a Dockerfile.
Instead of a Dockerfile you should write a shell script that invokes all the commands, commits the image and removes the working container. That's how the "Buildahfile" (that's really just a shell script) for mkdev looks like: #!/bin/bash set -x mkdev=$(buildah from centos:7) buildah run "$mkdev" -- curl -L -o epel-release-latest-7.noarch.rpm buildah run "$mkdev" -- curl -L -o wkhtmltopdf.rpm buildah run "$mkdev" -- curl "" -o "awscli-bundle.zip" buildah run "$mkdev" -- rpm -ivh epel-release-latest-7.noarch.rpm buildah run "$mkdev" -- yum install centos-release-scl -y buildah run "$mkdev" -- yum install unzip postgresql-libs postgresql-devel ImageMagick \ autoconf bison flex gcc gcc-c++ gettext kernel-devel make m4 ncurses-devel patch \ rh-ruby25 rh-ruby25-ruby-devel rh-ruby25-rubygem-bundler rh-ruby25-rubygem-rake \ rh-postgresql96-postgresql openssl-devel libyaml-devel libffi-devel readline-devel zlib-devel \ gdbm-devel ncurses-devel gcc72-c++ \ python-devel git cmake python2-pip chromium chromedriver which -y buildah run "$mkdev" -- pip install ansible boto3 botocore buildah run "$mkdev" -- yum install wkhtmltopdf.rpm -y buildah run "$mkdev" -- ln -s /usr/local/bin/wkhtmltopdf /bin/wkhtmltopdf buildah run "$mkdev" -- unzip awscli-bundle.zip buildah run "$mkdev" -- ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws buildah run "$mkdev" -- yum clean all && rm -rf /var/cache/yum git archive -o app.tar.gz --format=tar.gz HEAD buildah add "$mkdev" app.tar.gz /app/ buildah add "$mkdev" infra/app/build/entrypoint.sh /entrypoint.sh buildah config --workingdir /app "$mkdev" buildah run "$mkdev" -- scl enable rh-ruby25 "bundle install" rm app.tar.gz buildah config --port 3000 "$mkdev" buildah config --entrypoint '[ "/entrypoint.sh" ]' "$mkdev" buildah run "$mkdev" -- chmod +x /entrypoint.sh buildah config --cmd "bundle exec rails s -b '0.0.0.0' -P /tmp/mkdev.pid" "$mkdev" buildah config --env LC_ALL="en_US.UTF-8" "$mkdev" buildah run "$mkdev" -- rm -rf /app/ buildah commit "$mkdev" "docker.io/mkdevme/app:dev" buildah rm "$mkdev" This script probably looks extremely stupid to you if you ever produced a good container image in your life. Let me explain some of the things that are happening there: - We use Centos 7 as a base image because in production we run on Centos 7. Even if we don't run containers in production just yet, it makes sense to keep the development environment as close to production one as possible. - We do install a ridiculous number of packages, including AWS CLI, Chromium, Software Collections and what not. We do it because we use the resulting image in development environment and in our CI system. Both of these locations require extra tooling to run integration tests (Chromium) or perform some packaging and deployment tasks (AWS CLI and Ansible). Software Collections are used in our production environment and it's important we use the same Ruby version in all other envs as well. - We remove the code of the application itself at the very end. For this use case, we don't really need the code to be in the image. In both development environment and CI we need the latest version of the code, not something baked into the image. We store this script inside the application repo, just like we would keep the Dockerfile there. Once we decide we want to run mkdev in containers in production, we can modify this script to do different things depending on the environment. You can use this approach only if your build server is able to run the shell script. 
This is not a problem because Windows has WSL, for example. Your host system doesn't have to be Linux based as long as it is able to run some kind of Linux inside! Will it work one day for MacOS users without extra Linux VM? Who knows, let's hope Buildah developers are working on it. How does Buildah work internally? Both Podman and Buildah work quite similar internally. They both make use of Linux kernel features, specifically user namespaces and network namespaces to make it possible to run containers without any root privileges. I won't talk about it in this article, but if you can't wait, then start by reading following resources: - How rootless Buildah works: building containers in unprivileged environments - Podman: A more secure way to run containers - Podman and user namespaces: A marriage made in heaven. What's next I hope you've learned a lot about container images today. Buildah is a great tool not only for local development, but for any kind of automation around building container images. It's not the only one available, Kaniko from Google being another example, though Kaniko is a bit more focused on Kubernetes environments. Now that we have an image in place, it's time to run it. In the next article I will show you how to use Podman to completely automate local development environment for a Ruby on Rails application. We will learn how to use Kube YAML feature of Podman to describe all the services in a Kubernetes-compliant YAML definition, how to run a Rails application a container and how to run tests of the Rails application in this container. Containers and Podman in particular will become really handy when we will start creating ephemeral Mattermost instances just for the integration testing. Feel free to ask any questions in the comments below, I will make sure to reply to them directly or extend this article! This is an mkdev article written by Kirill Shirinkin. You can hire our DevOps mentors to learn all about containerization yourself. Discussion The podman proposal is quite interesting. Mainly because it maintains the same interface as Docker and runs without daemon. However, in a more complex approach, it is still quite incipient. For example, in an approach with k8s.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/mkdev/dockerless-part-2-how-to-build-container-image-for-rails-application-without-docker-and-dockerfile-48e8
CC-MAIN-2020-45
en
refinedweb
Author: Wang Xining, Senior Technical Expert of Alibaba Guidance: This paper is abstracted from the book "Istio Service Grid Technology Analysis and Practice" written by Wang Xining, a senior technical expert at Ali Cloud. It describes how to use Istio to manage multi-cluster deployment to illustrate the support capability of the service grid for multi-cloud, multi-cluster, or mixed deployment. Previous details: How to use Istio for multi-cluster deployment management: Single Control Plane VPN Connectivity Topology In a single control plane topology, multiple Kubernetes clusters work together to use a single Istio control plane running on one of the clusters. The control plane Pilot manages services on local and remote clusters and configures the Envoy Sidecar proxy for all clusters. Cluster-aware service routing Cluster-aware service routing capabilities were introduced in Istio 1.1, where the Split-horizon EDS (Horizontal Split Endpoint Discovery Service) functionality of Istio can be used to route service requests to other clusters through its entry gateway in a single control plane topology configuration. (Cluster-aware service routing) As shown, the primary cluster Cluster 1 runs a full set of Istio control plane components, while Cluster 2 runs only Istio Citadel, Sidecar Injector, and Ingress gateways. No VPN connection is required and no direct network access is required between workloads in different clusters. Generate intermediate CA certificates for each cluster's Citadel from the shared root CA; the shared root CA enables bidirectional TLS communication across different clusters. For illustration purposes, we use the sample root CA certificate provided in the Istio installation in the samples/certs directory for both clusters. In a real deployment, you may use different CA certificates for each cluster, all of which are signed by a common root CA. In each of the Kubernetes clusters (including the clusters cluster1 and cluster2 in the example), use the following command to create a secret for the generated CA certificates: Deploy the Istio control plane components In cluster 1: 1. If helm dependencies are missing or not up to date, you can update them with helm dep update. It is important to note that because istio-cni is not used, you can temporarily remove it from the dependency requirements.yaml before proceeding. 3. Since both clusters communicate with the same Pilot, this gateway instance also applies to cluster 2. istio-remote component Deploy the istio-remote component in another cluster, cluster 2, by following these steps: 1. Get the entrance gateway address of cluster 1 first, as follows: export LOCAL_GW_ADDR=$(kubectl get svc --selector=app=istio-ingressgateway \ -n istio-system -o jsonpath="{.items[0].status.loadBalancer.ingress[0].ip}") 2. Create an Istio remote deployment YAML file using Helm. 3. In the namespace istio-system in cluster 1, replace the gateway address of network2: change it from 0.0.0.0 to the gateway address of cluster 2, ${REMOTE_GW_ADDR}. After saving, Pilot automatically reads the updated network configuration. 4. Create a Kubeconfig for Cluster 2. Create a Kubeconfig for the service account istio-multi on cluster 2 with the following command and save it as a file n2-k8s-config: In cluster 1, execute the following commands to add the above generated kubeconfig of Cluster 2 to the secret of Cluster 1.
After these commands are executed, Istio Pilot in Cluster 1 will begin to listen for services and instances of Cluster 2, just as it does in the first Kubernetes cluster. Deploy example services Deploy the sleep service and the helloworld service in version v1 in cluster 1, deploy the helloworld service in version v2 in the second cluster 2, and then verify that the sleep application can invoke the helloworld service in a local or remote cluster. 1. Deploy the sleep service and version v1 of the helloworld service to cluster 1, and version v2 of the helloworld service to cluster 2. Then check Pilot's registration of both versions with the curl localhost:8080/v1/registration | grep helloworld -A 11 -B 2 command. If you get results similar to the following, the helloworld services for versions v1 and v2 are already registered in the Istio control plane: 4. Verify that the sleep service in cluster 1 can properly invoke the helloworld service in a local or remote cluster by executing the following commands under cluster 1: kubectl exec -it -n app1 $SLEEP_POD sh Log in to the container and run curl helloworld.app1:5000/hello. If set up correctly, you can see two versions of the helloworld service in the call results returned, and you can verify the IP address of the accessed endpoint by viewing the istio-proxy container log in the sleep container group. The result is as follows: Readers of "Istio Service Grid Technology Analysis and Practice" can experience ASM products for free! Click to learn about the Ali Cloud Service Grid product ASM: Introduction to the Author Wang Xining, Senior Technical Expert of Ali Cloud, Technical Leader of Ali Cloud Service Grid ASM and Istio on Kubernetes, specializes in Kubernetes, Cloud Native, Service Grid and other fields. He has worked in the IBM China Development Center, chaired the Patent Technology Review and Adjudication Board, and holds over 40 international technology patents in related fields. He wrote "Istio Service Grid Technology Analysis and Practice", which details the basic principles and development practices of Istio and contains a large number of selected cases and reference codes that can be downloaded for a quick start with Istio development. Gartner believes that the service grid will become the standard technology for all leading container management systems by 2020. This book is suitable for all readers interested in micro-services and cloud native. It is recommended that you read this book in depth. Course Recommendation In order for more developers to enjoy the dividend of Serverless, this time, we have assembled 10+ Alibaba Serverless field technical experts to create a Serverless public course that is best suited for developers to get started, so that you can learn from it and embrace the new paradigm of cloud computing - Serverless. Click to view the course for free. "Alibaba Cloud Native" focuses on technology areas such as micro services, Serverless, containers, and Service Mesh, follows popular cloud native technology trends and cloud native large-scale landing practices, and aims to be the public account that best understands cloud native developers.
https://programmer.ink/think/single-control-plane-gateway-connection-topology.html
CC-MAIN-2020-45
en
refinedweb
Components Switch Switch enables a user to quickly toggle between two states. Switch is similar to a radio group in function but is used for quickly toggling between binary actions. Switch must always be accompanied by a label, and follows the same keyboard workflow as a checkbox. import { Switch } from '@sproutsocial/racine'
https://seeds.sproutsocial.com/components/switch/
CC-MAIN-2020-45
en
refinedweb
Development Team/Chroot Status - Debian chroot construction has been automated: see puritan-sugar.tar.bz2 and its README -- Michael Stone 20:33, 1 August 2009 (UTC) - Sugar continues to run happily in Squeeze chroots. --Michael Stone 16:30, 3 July 2009 (UTC) - Sugar is now somewhat runnable from chroots. Jaunty and Squeeze have been tested recently; Fedora has not. --Michael Stone 22:21, 23 May 2009 (UTC) Chroot Construction There are lots of ways to create appropriate chroots; e.g. by hand, with debootstrap, with mock, etc. Here are some ideas to help you get started: Ubuntu jaunty chroot With recent versions of debootstrap, in order to get a working chroot, you want something like: export CHROOT=`pwd`/jaunty-root sudo debootstrap --arch i386 jaunty $CHROOT sudo chroot $CHROOT /bin/bash -l mount -t proc proc /proc mount -t devpts devpts /dev/pts Debian squeeze chroot With debootstrap, in order to get a working chroot, you want something like: export CHROOT=`pwd`/sid-root sudo debootstrap --arch i386 squeeze $CHROOT sudo chroot $CHROOT /bin/bash -l # and some of the following: mount -t tmpfs tmpfs $CHROOT/tmp mount -t proc proc $CHROOT/proc mount -t devpts devpts $CHROOT/dev/pts mount -t selinuxfs selinux $CHROOT/selinux Reference: Note: you can use approx to cache packages across multiple runs for faster testing: apt-get install approx echo 'debian' >> /etc/approx/approx.conf /etc/init.d/approx restart sudo debootstrap --arch i386 squeeze $CHROOT Fedora rawhide chroot yum install febootstrap mock With mock, it would be more like: mock -r fedora-devel-i386 --init mock -r fedora-devel-i386 --install yum mock -r fedora-devel-i386 --shell Gentoo chroot Well, if you are familiar with the regular Gentoo installation process (by using the Handbook instead of GUI stuff) you should know how to set up a Gentoo chroot :). Otherwise use these instructions. - First, you need a Gentoo stage tarball. You can borrow it from any mirror (use the releases/<platform>/current subdirectory). - Let's say you are using stage3-i686-20090623.tar.bz2, then: mkdir chroot-gentoo tar xjpf stage3-i686-20090623.tar.bz2 -C chroot-gentoo mount -o bind /dev chroot-gentoo/dev mount -o bind /proc chroot-gentoo/proc chroot chroot-gentoo /bin/bash - To install sugar, use sugar-overlay or set up a sugar-jhbuild environment (with sugar-overlay you can install git packages as well). Sugar Installation jaunty chroot sed -ie "s/main/main universe/" /etc/apt/sources.list apt-get update apt-get install locales locale-gen "$LANG" dpkg-reconfigure tzdata apt-get install sugar sugar-activities # install your development tools here # patch (hopefully temporary) bugs sed -ie '114i\\ if not favorites_settings.layout: favorites_settings.layout = favoriteslayout.RingLayout.key' /usr/lib/python2.6/dist-packages/jarabe/desktop/favoritesview.py squeeze chroot apt-get update apt-get install locales dpkg-reconfigure locales # edit /etc/hosts apt-get install education-desktop-sugar # install your development tools here # fix broken hippocanvas (Debian bug#522231) echo "deb-src squeeze main" >> /etc/apt/sources.list # echo "deb-src squeeze main" >> /etc/apt/sources.list apt-get update apt-get install apt-src devscripts apt-src install python-hippocanvas cd *hippo* DEB_BUILD_OPTIONS=nostrip debuild -us -uc cd .. dpkg -i *hippo*.deb User Accounts For stupid reasons, it's necessary that Sugar run under a uid inside the chroot which exists as a real account outside the chroot.
(Talk to the DBus people.) Consequently, as root, run something like this both inside and outside the chroot: groupadd -g 64002 sugar useradd -m -u 64002 -g sugar -s /bin/bash sugar D-Bus Sugar wants to be able to use global state stored in both HAL and NetworkManager, both of which live on the system bus. Consequently, outside the chroot, we need to sudo mount --bind /var/run/dbus $CHROOT/var/run/dbus before entering the chroot. (Mock uses unshare() to enter a new mount-point namespace since this makes garbage collection of mountpoints much easier.) X11 We need to point Sugar at an X server. One easy (but insecure) way to do this is to make a nested X server like so, outside the chroot: Xephyr -ac :1 -screen 800x600x24 # 1024x768x24 See the talk page for more secure alternatives. Running Sugar Then, inside the chroot, you can happily run sugar as user 'sugar' with something like sudo chroot $CHROOT /bin/bash -l su sugar - cd ~ ulimit -c unlimited export DISPLAY=localhost:1 export DBUS_SESSION_BUS_ADDRESS=$(dbus-daemon --session --print-address --fork) sugar Then pull up the frame, switch to the home view, and launch some activities! Cleaning Up To correctly delete a chroot that you no longer need, kill all processes running in the chroot, and sudo killall -u sugar export CHROOT=/path/to/my/chroot # important! umount $CHROOT/var/run/dbus umount $CHROOT/proc umount $CHROOT/dev/pts umount $CHROOT/tmp rm -rf $CHROOT
http://wiki.sugarlabs.org/index.php?title=Development_Team/Chroot&diff=prev&oldid=35190
CC-MAIN-2020-45
en
refinedweb
Authorization Models with Auth0 In a typical application, you might have different "tiers" of users. Let's say you have a blog and a database of users who interact with the blog. The first set of users, let's call them subscribers, can only view public blog posts. Next you have the users who manage the blog and are permitted to see the blog's dashboard. Even within that set of users you might have admins and editors, both of which have different permissions. So how would we control who can see what? A popular way to do this is with role-based access control. We'd simply assign each of those users the role of either subscriber, admin, or editor and then associate certain permissions with that role. This is a great authorization model for a lot of applications, but sometimes as your application grows, you might find you're creating more and more one-off roles and permissions. Maybe you created a subsection of the blog that only users who have been subscribed for over a year have access to. Should you create a role specifically for that? In our application, we're going to use our flexible GraphQL API with Auth0's rules to implement two other options for handling these more complex authorization scenarios: Attribute-based Access Control and Graph-based Access Control. Attribute-based Access Control (ABAC) Attribute-based access control means we're authorizing our user to have access to something based on a trait associated with that user. So in the above example, instead of assigning each user who has been subscribed for over a year a special role for that, we'd just look at the created_at field associated with that user and allow or deny access based on that. Graph-based Access Control (GBAC) Graph-based access control is where we allow access to something based on how that data relates to other data. In the blog example, we might want to allow a guest author access to the posts section of the dashboard, but only let them view and edit posts that they wrote. In this case, we need to check the relationship between a post and the user before allowing access. Usually we'll have an author_id field on the post that will link back to the id field of a user. If that relationship exists, we can grant post editing access for that user. If not, we can simply deny them on the spot. So what does this have to do with graph-structured data? Modeling our data in a graph-like structure is a great way to bring flexibility to our data and really make relationships the top priority. As applications become increasingly complex, it gets much harder to manage roles and permissions. "Graph-based access control is where we allow access to something based on how that data relates to other data." Think of an application as massive as Facebook. You allow your profile to be viewed by your friends and also friends of friends. Some user clicks to view your profile, so now Facebook needs to run a query to search through all of that person's friends and then search all of the friends of those friends before it can authorize them to view your profile. That's a lot of work! By modeling these relationships in a graph, we can just select out any point in the graph and "hop" to the next data point to see that relationship. Then we just define rules that use those relationships or data attributes to make authorization decisions. That's exactly what we're going to do in this article.
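Stripped of any framework, both of those checks are tiny. Here is a plain-Python sketch of the two decisions described above; the field names (author_id, created_at) mirror the blog example, not the Quidditch schema we build later:

def can_edit_post(user, post):
    # GBAC: follow the post -> author edge and allow access
    # only if it leads back to the requesting user.
    return post.author_id == user.id

def can_view_archive(user, today):
    # ABAC, for contrast: the decision hangs off an attribute
    # (account age), not a relationship.
    return (today - user.created_at).days >= 365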
Why Flask and GraphQL We're going to be using Flask as our application backend. Flask is a simple and flexible web framework for Python that provides us with tools that will make our application development much faster and easier. Flask is super lightweight, so it's a perfect choice for a simple but extensible backend. We'll also be using GraphQL to build a simple API. GraphQL is a query language that allows us to create APIs or work with existing APIs. It was created by Facebook and publicly released in 2015. Since then it's been gaining a ton of traction from both individual developers and also big companies such as Airbnb and Shopify. "Flask is a simple and flexible web framework for Python!" Instead of your data being accessed in a table format, imagine it's a graph. Once we establish what data points are exposed in that graph in the backend, we can use the frontend to pick and choose exactly what data we want and how we want it formatted. The client query dictates the response. Some notable features: - It can be used with any language or database - Can be used on top of existing REST APIs - Gives the client side more control Prerequisites Because Flask is a web framework for Python, we need to have Python on our machines. You can check if it's installed on your system by opening your terminal and running: python --version If a version is not returned, you're going to need to download and install the latest version from the Python website. For this tutorial, we'll be using Python 3. Next we're going to be using Pip, which is a package manager for Python, similar to npm. If you downloaded a recent version of Python 3 (3.4 or higher), pip should have been installed with it. If not, you can install it here. You can double check if it's installed with: pip --version Setting Up our Application Before we jump into GraphQL, let's set up our Flask backend and then get our database ready for querying. First things first, let's set up our project directory. Create a folder called flask-quidditch-manager. Then enter into that folder and we'll create our first file, which will serve as the entry point for our application: app.py. You can do this in your terminal with the following commands: mkdir flask-quidditch-manager cd flask-quidditch-manager touch app.py You can open your preferred code editor now and let's get started with Python. Creating a virtual environment Whenever you're creating a Python project that requires external packages, it's a good idea to create a virtual environment. This will keep all of our dependencies isolated to that specific project. If we just installed every package globally on our system, we could eventually run into problems if we had a scenario where two different projects required different versions of the same package. So let's set up our virtual environment now. If you're on Python 3, the module venv should already be installed on your system. If not, you can find installation instructions here. Make sure you're in the project folder flask-quidditch-manager, and run the following command: For Mac/Linux: python3 -m venv env For Windows: py -3 -m venv env This will create a folder in your project folder called env where we can store all of our dependencies. Next we just need to activate it. All you have to do is run the activate script that's inside the folder we created. In this case it's located at env/Scripts/activate. The env part of the path will be replaced by whatever you named the environment. If you're on Windows use: env\Scripts\activate If you're on Mac or Linux use:
. env/bin/activate Your terminal prompt should now be prefixed with the environment name, similar to (env). Whenever you're ready to exit the environment, just run deactivate in your terminal. Setting up Flask Now that we have a virtual environment to store our dependencies, we're finally ready to set up Flask! In your terminal run: pip install flask This installs Flask into the site-packages folder nested inside your env folder. Now let's set up a basic skeleton app. Open up your empty app.py file and paste in the following: # app.py # import flask from flask import Flask # initialise flask object app = Flask(__name__) # Create home route @app.route('/') def home(): return 'Hello world' if __name__ == '__main__': app.run(debug=True) The first thing we're doing here is importing Flask. Next we're creating a Flask instance called app. In the next line, we're creating a basic home route that returns 'Hello world' when called. This is just for testing purposes to make sure our server is running. The last line is actually how we'll start up our server. We're passing debug=True so that we don't have to restart the server every time we make a change to our code. Let's start it up now to make sure everything is working properly! python app.py Now if you go to localhost:5000 in your browser, you should be greeted with 'Hello world'. Setting Up our Database Next up let's create our database! We're going to have three tables: players, teams, and games, so let's see how we can create those. SQLite and SQLAlchemy We'll be using SQLite for our application's database. It's a lightweight database that's great for small applications such as our Quidditch Manager. While Python comes with built-in support for SQLite, it can be a bit tedious to work with if you need to write a lot of SQL queries. To make things easier, we're going to be using SQLAlchemy, which is an ORM (object-relational mapper) that will help us to interact with our database. In your terminal, run the following pip command: pip install sqlalchemy Create a new file called database.py. Paste in the following code and then we'll go over it. from sqlalchemy import create_engine from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.orm import scoped_session, sessionmaker # Database setup # Sqlite will just come from a file engine = create_engine('sqlite:///quidditch.db') db_session = scoped_session(sessionmaker(autocommit=False, autoflush=False, bind=engine)) Base = declarative_base() Base.query = db_session.query_property() First we need to import create_engine from the sqlalchemy package. Next we're going to create our database file using create_engine. This is the starting point for our SQLite database. Then we create a session so we can interact with the database. Next we construct a base class Base for our class definitions (we'll use this later when we're creating our models). Finally Base.query is going to be required for making our queries later on. Creating our models Next up we're going to create our models. A model is a class that represents the structure of our data. Each model will map to a table in our database and include information such as the field name, type, if it's nullable, and more. We can also define any relationships between our tables in our model classes. Create a new file called models.py. Our database will have three tables, players, teams, and games, so we're going to have one class for each of them. In models.py, paste in the following chunk of code and then we'll walk through all of it to see what's going on here.
# models.py from database import Base from sqlalchemy import Column, ForeignKey, Integer, String from sqlalchemy.orm import backref, relationship # Create our classes, one for each table class Team(Base): __tablename__ = 'teams' id = Column(Integer, primary_key=True) name = Column(String(50)) rank = Column(Integer, nullable=False) players = relationship('Player', backref='on_team', lazy=True) def __repr__(self): return '<Team %r>' % self.name class Player(Base): __tablename__ = 'players' id = Column(Integer, primary_key=True) name = Column(String(50), nullable=False) position = Column(String(50), nullable=False) year = Column(Integer, nullable=False) team_id = Column(Integer, ForeignKey('teams.id'), nullable=False) def __repr__(self): return '<Player %r>' % self.name class Game(Base): __tablename__ = 'games' id = Column(Integer, primary_key=True) level = Column(String(30), nullable=False) child_id = Column(Integer, ForeignKey('games.id'), nullable=True) winner_id = Column(Integer, ForeignKey('teams.id'), nullable=False) loser_id = Column(Integer, ForeignKey('teams.id'), nullable=False) child = relationship('Game', remote_side=[id]) def __repr__(self): return '<Game %r>' % self.winner_id Here's the basic flow of what we're doing: - Importing the base class, Base, for filling out our tables - Define our three classes: Team, Player, and Game (note that they're all singular) - Set the table name for each class - Create variables that represent each column of each table - Specify any attributes required of that column - Define any relationships between tables - Define what gets returned if we call this class These classes are basically the lifeblood of our database. When we create our database in the next step, it's going to set up the tables and columns exactly how we told it to in this file. Just a quick sidenote, you may have noticed in the teams table we have a players column that's using relationship(). This is how we create a one-to-many relationship using SQLAlchemy. All we're saying is that one team can have many players, but a player can only belong to one team. These relationships are important to define now so that we can model them in our graph later. You can learn more about creating relationships in SQLAlchemy in this article. Creating and Seeding our Database Now that we've done all that setup, we can finally create our database! In your terminal, type python to start a Python shell. We're first going to import Base, engine, and db_session from database.py along with our classes from models.py. Then we'll just run SQLAlchemy's create_all() method to create the database. python from database import Base, engine, db_session from models import Team, Player, Game Base.metadata.create_all(bind=engine) You should now see a quidditch.db file in the root of your project folder. Our database now contains three empty tables: teams, players, and games. The final step before we move on to GraphQL is populating our database with data. We'll walk through the commands you can use in the Python shell to add data to the tables, but the data itself is pretty trivial and time-consuming to enter by hand, so head to the seeder.txt file in the GitHub repo to get all the data for this example. Let's manually enter Gryffindor into the teams table. Go to your terminal and open the Python shell by typing python and then enter the following: team1 = Team(name='Gryffindor', rank=1) db_session.add(team1) db_session.commit() This is the process we'll use to add any new row into a table. We're just calling the class for that table and specifying the attributes.
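If you want to confirm the row actually landed, you can read it straight back in the same shell session. This quick sketch uses nothing new, just the query property we attached to Base in database.py:

# Look the team back up by name; __repr__ makes the result readable.
Team.query.filter_by(name='Gryffindor').first()
# <Team 'Gryffindor'>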
Note that we don't need to specify the id as that's auto-generated. Let's add our first player as well. We've already imported the classes, so we don't need to do it again.

player1 = Player(name='Harry Potter', year=1, position='Seeker', on_team=team1)
db_session.add(player1)
db_session.commit()

You may have noticed that for the last attribute we called it on_team when the column is called team_id. Because we've already defined a relationship between the players table and the teams table, we can actually just use that backref we created earlier and assign it the team1 variable we just created for the Gryffindor team. That way we don't have to go through the trouble of looking up what id Gryffindor was assigned. Pretty neat!

To exit Python, press ctrl + D (on Windows, ctrl + Z and then enter).

Getting Started with GraphQL

Alright, now that we've set up Flask and have our database ready to go, let's finally play with GraphQL!

Integrating GraphQL with Graphene

First we need to install a few dependencies so we can bring GraphQL into our application.

pip install flask-graphql graphene graphene-sqlalchemy

flask-graphql - Flask package that will allow us to use the GraphiQL IDE in the browser
graphene - Python library for building GraphQL APIs
graphene-sqlalchemy - Graphene package that works with SQLAlchemy to simplify working with our models

Creating our schemas

Next let's create our schema. The schema is going to represent the graph-like structure of our data so that GraphQL can know how to map it. Instead of the traditional tabular structure of data, imagine we have a graph of data. Each square in the image below represents a node and each line connecting them is considered an edge.

Node - A node in a graph represents the data item itself, e.g. a player, game, or team
Edge - An edge connects 2 nodes and represents the relationship between them, e.g. a player belongs to a team

Each node will also have attributes associated with it. In this case we can see some of the position attributes such as Captain and Seeker, represented as ovals. If set up properly, GraphQL gives us the ability to select any tree from that graph. If we want to grab all players who are captains we can do that. If we want to grab all of the game data for a particular team, we can do that as well. GraphQL makes our data super flexible and gives the client more control over the type of data and structure of data that's returned to it.

But before we can start doing any queries, we're going to have to set up our schema with the help of the models that we defined earlier. Luckily Graphene makes this pretty simple for us. Create a new file called schema.py and paste the following code in.

from models import Team
from models import Player
from models import Game
import graphene
from graphene import relay
from graphene_sqlalchemy import SQLAlchemyObjectType, SQLAlchemyConnectionField

class PlayerObject(SQLAlchemyObjectType):
    class Meta:
        model = Player
        interfaces = (graphene.relay.Node, )

class TeamObject(SQLAlchemyObjectType):
    class Meta:
        model = Team
        interfaces = (graphene.relay.Node, )

class GameObject(SQLAlchemyObjectType):
    class Meta:
        model = Game
        interfaces = (graphene.relay.Node, )

class Query(graphene.ObjectType):
    node = graphene.relay.Node.Field()
    all_players = SQLAlchemyConnectionField(PlayerObject)
    all_teams = SQLAlchemyConnectionField(TeamObject)
    all_games = SQLAlchemyConnectionField(GameObject)

schema = graphene.Schema(query=Query)

This is a lot to digest, so let's break it down.
- Import our models
- Import the Graphene packages we installed earlier
- For each class, tell Graphene to expose all attributes from that model
- Create a query class
- In the query class, define queries for getting all entries for each of the classes defined above

Let's fill out that queries section a little more so we can demonstrate how to resolve more complex queries. Back in schema.py, keep everything the same, but add the following code to the Query class:

# schema.py
from sqlalchemy import or_

class Query(graphene.ObjectType):
    node = graphene.relay.Node.Field()
    all_players = SQLAlchemyConnectionField(PlayerObject)
    all_teams = SQLAlchemyConnectionField(TeamObject)
    all_games = SQLAlchemyConnectionField(GameObject)

    # Get a specific player (expects player name)
    get_player = graphene.Field(PlayerObject, name=graphene.String())
    # Get a game (expects game id)
    get_game = graphene.Field(GameObject, id=graphene.Int())
    # Get all games a team has played (expects team id)
    get_team_games = graphene.Field(lambda: graphene.List(GameObject), team=graphene.Int())
    # Get all players who play a certain position (expects position name)
    get_position = graphene.Field(lambda: graphene.List(PlayerObject), position=graphene.String())

    # Resolve our queries
    def resolve_get_player(parent, info, name):
        query = PlayerObject.get_query(info)
        return query.filter(Player.name == name).first()

    def resolve_get_game(parent, info, id):
        query = GameObject.get_query(info)
        return query.filter(Game.id == id).first()

    def resolve_get_team_games(parent, info, team):
        query = GameObject.get_query(info)
        return query.filter(or_(Game.winner_id == team, Game.loser_id == team)).all()

    def resolve_get_position(parent, info, position):
        query = PlayerObject.get_query(info)
        return query.filter(Player.position == position).all()

schema = graphene.Schema(query=Query)

So what's going on here? We're adding some more complex queries that can't just rely on the models above to display their data. For example, we're expecting GraphQL to get all games a team has played, but we haven't told it how to do that. We have to create resolvers that will work with our SQLite database and get that information to be added to the graph.

get_player
This will allow us to request any single player by name. We're passing in the PlayerObject, so we'll have access to all attributes for that player. Now we just need to set up a function to resolve that player, meaning the actual query we do on the database to get them. We're just searching the player table until we find a player whose name is equal to the one we passed in.

get_game
This is similar to get_player, except here we're getting a single game by id.

get_team_games
Here we're requesting data about all games played by a certain team. We're going to allow the client to pass in a team's id, and from there they can request any information they'd like about games that team has either won or lost. When we resolve that query, we're just searching the database for any games where the team's id matches the winner_id or loser_id. Also note, back in the get_team_games variable, we need to specify that we want a List of games instead of just one.

get_position
Our final query will allow the client to specify a player's position, and then we'll return all players who match that position.

Testing Queries with GraphiQL

Now that we have our schema set up, let's see how we can actually make those queries from the client.
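Before heading to the browser, it can help to sanity-check the schema straight from a Python shell. Here's a minimal sketch: the query and field names come from the schema above, and it relies on the Base.query session set up in database.py.

# A quick sanity check of the schema from the Python shell.
from schema import schema

result = schema.execute('{ allPlayers { edges { node { name } } } }')
print(result.errors)  # None if the query succeeded
print(result.data)    # e.g. {'allPlayers': {'edges': [{'node': {'name': 'Harry Potter'}}, ...]}}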
GraphiQL

GraphQL comes with this awesome IDE called GraphiQL that allows us to test our GraphQL queries directly in the browser. This will allow us to test our API calls in the same structure that the client will use. Back in app.py, let's create a new endpoint for the route /graphql.

# app.py
from flask import Flask
from flask_graphql import GraphQLView
from schema import schema

# initialise flask object
app = Flask(__name__)

app.add_url_rule(
    "/graphql",
    view_func=GraphQLView.as_view("graphql", schema=schema, graphiql=True)
)

if __name__ == '__main__':
    app.run(debug=True)

Your final app.py file should match the above. We aren't using that original home route anymore that we created for testing purposes, so you can go ahead and delete that now. Make sure you still have your app running (python app.py) and head on over to localhost:5000/graphql. We can enter our test queries on the left and it will immediately spit out the results on the right. This is very similar to how a client would consume our GraphQL API, so if we ever wanted to extend this example to have a frontend, these are the queries we'd use. Let's test out one of our initial queries now.

Get all players with name

{
  allPlayers {
    edges {
      node {
        name
      }
    }
  }
}

Think back to that graph of our data that we had above. To get all players, first we need to walk along all the lines in the graph (edges) that point to each player (nodes). Once we hit a node, we have access to all attributes of that node that were defined in our schema, which in this case is everything. Note that by convention, when we're making GraphQL queries we have to use camel case. In a normal REST API, if you did a query to get a user it might return a lot of unnecessary attributes about the user. With GraphQL, we can request exactly what we want. Let's look at another example to demonstrate this.

Get all players with their name, team name, and position

{
  allPlayers {
    edges {
      node {
        name
        position
        onTeam {
          name
          rank
        }
      }
    }
  }
}

This time we're going to request all players with their name, position, and team name. We're able to access onTeam because back when we set up our models, we defined a relationship between teams and players where we created that pseudo-column on the players table. This is how we can use it now! Instead of just getting back an id, we can request the name directly.

Get a single player with their position, team name, and team rank

query {
  getPlayer(name: "Harry Potter") {
    name
    position
    onTeam {
      name
      rank
    }
  }
}

GraphQL also lets you pass in arguments. This time we just want a single player by name, and we also want to get their position, team name, and team rank.

Get all players whose position is "Seeker"

query {
  getPosition(position: "Seeker") {
    name
    onTeam {
      name
    }
  }
}

For our next example, let's get all players, but we only want those who are Seekers.

Get a game and the child game associated with it

query {
  getGame(id: 8) {
    level
    winnerId
    loserId
    child {
      id
      level
      winnerId
      loserId
    }
  }
}

This is a great use case for graph-structured data. In this scenario we basically have a sports bracket. We're asking for attributes of a specific game such as who won and who lost. We can also hop over to the next game node and see what the child (or result of this game) was, and then get information about that game as well.

Creating Auth0 Authorization Rules

As mentioned at the beginning of this article, there are a few different ways we can authorize a user to have certain permissions in an application.
The most widely used one is role-based access control, where we have a user and we assign it roles. The roles then dictate the permissions that user has. This structure works fine for small, simple applications, but a lot of larger applications make authorization decisions that rely heavily on either attributes of a user or the relationships a user has to data. Now that we've created our GraphQL API, we can use that flexible data to implement two different authorization models: attribute-based access control and graph-based access control.

Creating an ABAC rule

Attribute-based access control means we're authorizing a user to access something based on an attribute of that user, the resource, or the request at hand. In our quidditch example, let's say our application has special forums where all players with certain attributes can chat with each other. For example, every player who is in the same year at Hogwarts will be able to access the chat for their year. It doesn't matter what team they're on, as long as they have the same value for year. We can actually create this rule pretty easily through the Auth0 dashboard. Let's see it in action.

First, sign up for a free Auth0 account here. You'll be prompted to create a tenant domain or just accept the auto-generated one. Fill out your account information and then you'll enter into the dashboard. Click on "Rules" on the left-hand side. Auth0 rules are special functions you can create that will run whenever a user logs into your application. They allow us to add information to a user's profile, ban specific users based on certain attributes, extend permissions to users, and more. Press "Create Rule" and let's make a rule that will extend a chat permission to a user based on what year they're in at Hogwarts.

// Give the user permissions to access the chat for their year
function (user, context, callback) {
  const axios = require('axios');
  const name = user.name;
  axios({
    url: '',
    method: 'post',
    data: {
      query: `
        {
          getPlayer(name: "${name}") {
            name
            position
            year
          }
        }
      `
    }
  }).then((result) => {
    if (result.data.data.getPlayer.year) {
      let playerYear = result.data.data.getPlayer.year;
      context.accessToken.scope = context.accessToken.scope || [];
      context.accessToken.scope.push([`year_${playerYear}_chat`]);
      return callback(null, user, context);
    } else return callback(new UnauthorizedError('Access denied.'));
  }).catch(err => {
    return callback(err);
  });
}

First we're going to require axios so we can make the call to our GraphQL API. We have access to the user who's trying to access the chat through the user variable. Let's just grab the name from the user and pass that into our getPlayer query. Of course, in the real world we wouldn't use name since that isn't unique, but this example is just for demonstration. Next we just need to wait for this response and, when it comes back, check if that user has a year set. If so, we push the permission for access to that year's chat onto their access token's scope. Let's test that this works. Click "Try this rule" and we can run the rule with a mock user.

Our user during login

This is what the user object looks like before logging in. We have our user's basic information like id and name. Then in the next image we can see the user's context object, which holds information about the authentication transaction. Notice that the accessToken scope is currently empty. Click "Try" so we can run this rule against this user.
After logging in

Now our user is returned, and if you look at the context object, we can see a year_2_chat permission has been added to the access token's scope.

Denying a user

This is a quick way to grant permissions dynamically. We can set up our app so that in order to access a certain year's chatroom, you must have the correct permission for that year. So if a player in her 3rd year tries to access Year 2 Chat, she will be denied.

Creating a GBAC rule

Next up, let's create our graph-based rule. For this scenario, let's imagine that we need to restrict view access of players' profiles based on what team they're on. A player can see the profile of every other player on their team, but no one else. We want to create a rule that jumps in after a user logs in and determines what players the user will be able to see. First we'll run the getPlayer query for the user that's logging in. In that query, we'll use the onTeam relationship to pull what team the user is on. From there we can use the players relationship to grab all of the players that are on that team. This is the query and the data that we're going to use to determine what the user can access:

Create a new rule with the following:

function (user, context, callback) {
  const axios = require('axios');
  if (! user.id) return callback(new UnauthorizedError('Access denied. Please login.'));
  axios({
    url: '',
    method: 'post',
    data: {
      query: `
        {
          getPlayer(name: "${user.name}") {
            name
            onTeam {
              name
              players {
                edges {
                  node {
                    name
                    position
                    year
                  }
                }
              }
            }
          }
        }
      `
    }
  }).then((result) => {
    if (result.data.data.getPlayer.onTeam) {
      context.viewablePlayers = result.data.data.getPlayer.onTeam.players.edges;
      return callback(null, user, context);
    } else return callback(new UnauthorizedError('Please join a team to see players.'));
  }).catch(err => {
    return callback(err);
  });
}

Before/during login

Harry Potter clicks the login button to get into his dashboard. The rule will run and modify the context object based on those relationships. Just for demonstration purposes, to verify it's working, we'll add his list of viewable players to the context object. We could also add specific permissions based on this information as well.

After logging in

Harry Potter is in and now has access to these teammates:

Wrap Up

We've covered a lot in this post, and even though it takes some work to set up, I hope you can see the value of integrating GraphQL into your application. It gives the client the power to request exactly what they want, and it can also help expand the capabilities of your application's authorization flow. We can simplify this even further by using rules in Auth0's dashboard to extend permissions or assign roles based on certain attributes or relationships. Thanks for following along and be sure to leave any questions below!
https://auth0.com/blog/authorization-series-pt-3-dynamic-authorization-with-graphql-and-rules/
CC-MAIN-2020-45
en
refinedweb
Hot questions for Using Vapor in database

Question: I want to write some integration tests for a Vapor 3 server and I need to have a clean Postgres database each time I run my tests. How can I achieve this? It seems migrations aren't the right way to go, as they only run once, when the database doesn't exist yet.

Answer: One approach: require a DB to be running before you run the tests, and revert all migrations at the start of each test run, which gives you a clean DB each time. A docker-compose.yml in the project's root directory can also spin up a completely isolated test environment on Linux.

Question: I want to create a command in which you can create a user (like database seeds). However, I cannot access a database in a command; my code is like the following:

import Command
import Crypto

struct CreateUserCommand: Command {
    var arguments: [CommandArgument] {
        return [.argument(name: "email")]
    }

    var options: [CommandOption] {
        return [
            .value(name: "password", short: "p", default: "", help: ["Password of a user"]),
        ]
    }

    var help: [String] {
        return ["Create a user with provided identities."]
    }

    func run(using context: CommandContext) throws -> Future<Void> {
        let email = try context.argument("email")
        let password = try context.requireOption("password")
        let passwordHash = try BCrypt.hash(password)
        let user = User(email: email, password: password)
        return user.save(on: context.container).map(to: Future<Void>) { user in
            return .done(on: context.container)
        }
    }
}

Like the above, I want to save users by executing a query on context.container, but I got an "argument type 'Container' does not conform to expected type 'DatabaseConnectable'" error. How do I access the database in a command?

Answer: It seems like this might be the way to go:

func run(using context: CommandContext) throws -> EventLoopFuture<Void> {
    let email = try context.argument("email")
    let password = try context.requireOption("password")
    let passwordHash = try BCrypt.hash(password)
    let user = User(email: email, password: password)
    return context.container.withNewConnection(to: .psql) { db in
        return user.save(on: db).transform(to: ())
    }
}

Question: I want to be able to bulk add records to a nosql database in Vapor 3. This is my struct:

struct Country: Content {
    let countryName: String
    let timezone: String
    let defaultPickupLocation: String
}

So I'm trying to pass an array of JSON objects, but I'm not sure how to structure the route nor how to access the array to decode each one. I have tried this route:

let countryGroup = router.grouped("api/country")
countryGroup.post([Country.self], at: "bulk", use: bulkAddCountries)

with this function:

func bulkAddCountries(req: Request, countries: [Country]) throws -> Future<String> {
    for country in countries {
        return try req.content.decode(Country.self).map(to: String.self) { countries in
            // creates a JSON encoder to encode the JSON data
            let encoder = JSONEncoder()
            let countryData: Data
            do {
                countryData = try encoder.encode(country) // encode the data
            } catch {
                return "Error. Data in the wrong format."
            }
            // code to save data
        }
    }
}

So how do I structure both the route and the function to get access to each country?

Answer: I'm not sure which NoSQL database you plan on using, but the current beta versions of MongoKitten 5 and Meow 2.0 make this pretty easy. Please note that we haven't written documentation for these two libraries yet, as we pushed for a stable API first.
The following code is roughly what you need with MongoKitten 5:

// Register MongoKitten to Vapor's Services
services.register(Future<MongoKitten.Database>.self) { container in
    return try MongoKitten.Database.connect(settings: ConnectionSettings("mongodb://localhost/my-db"), on: container.eventLoop)
}

// Globally, add this so that the above code can register MongoKitten to Vapor's Services
extension Future: Service where T == MongoKitten.Database {}

// An adaptation of your function
func bulkAddCountries(req: Request, countries: [Country]) throws -> Future<Response> {
    // Get a handle to MongoDB
    let database = req.make(Future<MongoKitten.Database>.self)

    // Make a `Document` for each Country
    let documents = try countries.map { country in
        return try BSONEncoder().encode(country)
    }

    // Insert the countries to the "countries" MongoDB collection
    return database["countries"].insert(documents: documents).map { success in
        return // Return a successful response
    }
}

Question: I am trying to fix an error I have been getting recently when I run my Vapor project. It builds fine, but when it runs, it crashes. Here is my log:

fatal error: Error raised at top level: Fluent.EntityError.noDatabase: file /Library/Caches/com.apple.xbs/Sources/swiftlang/swiftlang-800.0.58.6/src/swift/stdlib/public/core/ErrorType.swift, line 184
Current stack trace:
0 libswiftCore.dylib 0x0000000100fe7cc0 swift_reportError + 132
1 libswiftCore.dylib 0x0000000101004f50 _swift_stdlib_reportFatalErrorInFile + 112
2 libswiftCore.dylib 0x0000000100fb3370 partial apply for (_assertionFailed(StaticString, String, StaticString, UInt, flags : UInt32) -> Never).(closure #1).(closure #1).(closure #1) + 99
3 libswiftCore.dylib 0x0000000100dfb0a0 specialized specialized StaticString.withUTF8Buffer<A> ((UnsafeBufferPointer<UInt8>) -> A) -> A + 355
4 libswiftCore.dylib 0x0000000100fb32b0 partial apply for (_assertionFailed(StaticString, StaticString, StaticString, UInt, flags : UInt32) -> Never).(closure #1).(closure #1) + 144
5 libswiftCore.dylib 0x0000000100dfb5b0 specialized specialized String._withUnsafeBufferPointerToUTF8<A> ((UnsafeBufferPointer<UInt8>) throws -> A) throws -> A + 124
6 libswiftCore.dylib 0x0000000100f57af0 partial apply for (_assertionFailed(StaticString, String, StaticString, UInt, flags : UInt32) -> Never).(closure #1) + 185
7 libswiftCore.dylib 0x0000000100dfb0a0 specialized specialized StaticString.withUTF8Buffer<A> ((UnsafeBufferPointer<UInt8>) -> A) -> A + 355
8 libswiftCore.dylib 0x0000000100dfae80 _assertionFailed(StaticString, String, StaticString, UInt, flags : UInt32) -> Never + 144
9 libswiftCore.dylib 0x0000000100e1e540 swift_unexpectedError_merged + 569
10 App 0x0000000100001ef0 main + 2798
11 libdyld.dylib 0x00007fff974375ac start + 1
Program ended with exit code: 9

I am using the VaporPostgreSQL package.
Here is my Package.swift:

import PackageDescription

let package = Package(
    name: "mist",
    dependencies: [
        .Package(url: "", majorVersion: 1, minor: 2),
        .Package(url: "", majorVersion: 1, minor: 1)
    ],
    exclude: [
        "Config",
        "Database",
        "Localization",
        "Public",
        "Resources",
        "Tests",
    ]
)

And main.swift:

import Vapor
import VaporPostgreSQL
import Auth
import HTTP

let drop = Droplet()
let auth = AuthMiddleware(user: User.self)

try drop.addProvider(VaporPostgreSQL.Provider.self)
drop.preparations.append(Post.self)
drop.preparations.append(User.self)
drop.preparations.append(Site.self)
drop.middleware.append(auth)

let admin = AdminController()
var site = Site(name: "", theme: "")

if let retreivedSite = try Site.all().first {
    site = retreivedSite
} else {
    drop.get { req in
        return Response(redirect: "")
    }
}

drop.get { req in
    return try drop.view.make("Themes/VaporDark/index", [
        "posts": Node(node: JSON(Post.all().makeNode()))
    ])
}

admin.addRoutes(to: drop)
drop.resource("posts", PostController())
drop.run()

My postgres version is 9.6.1. For some reason VaporPostgreSQL won't update, and I think that might be part of the problem. I have tried vapor xcode, vapor build and vapor clean, but I can't get the latest version.

Answer: I think the issue is here:

if let retreivedSite = try Site.all().first {
    site = retreivedSite
} else {
    drop.get { req in
        return Response(redirect: "")
    }
}

More specifically, the Site.all() call. We don't prepare the models until the run() command is called, so, to look up Site before that point, the model will need to be prepared manually. Hope this helps!

Question: I would like to have a table with a string column as a primary key without having to use raw SQL syntax. Here's my fluent "preparation":

static func prepare(_ database: Database) throws {
    try database.create("roles") { roles in
        roles.id("name")
        roles.string("readable_name")
    }
}

According to both my tests and the docs, the resulting query will be similar to:

CREATE TABLE `roles` (`name` INTEGER PRIMARY KEY NOT NULL, `readable_name` TEXT NOT NULL)

I could not, so far, find a way to have a string (TEXT, VARCHAR, ...) as a primary key without raw SQL syntax, and I would like to know whether it's possible to do it or not using the fluent query builder which comes with Vapor.

Answer: Support for ID types besides INT was added in Fluent 2.

Question: I have built a MySQL database with multiple tables and complex relationships, but when I go through the Vapor documentation, specifically in the building-the-model phase, there is a method for creating the table (that my model class will interact with).

static func prepare(_ database: Database) throws {
    try database.create("users") { users in
        users.id()
        users.string("name")
    }
}

However, I don't want to use it, because the tables that I already have contain foreign keys and types like DATETIME (which I don't know how to declare within the Swift context). Is there a way to link my already built tables with Vapor?

Answer: This is somewhere Vapor (or more correctly Fluent, which is the database level of Vapor) is a bit limited. Yes, you can use your existing tables. In your prepare(_:) method, you can simply leave the implementation empty without creating a table at all. You should also leave revert(_:) empty as well. In your init(node:in:) initialiser and makeNode(context:) method, you will need to map between the column names and types in your table and the property types in your Swift model.
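To make that concrete, here is a rough sketch of what such a mapping might look like for an existing table with a DATETIME column, following the same Vapor 1/2-style Model pattern shown in the questions above. The Event type, the column names, and the date format are invented for illustration; substitute your own schema (and note that some drivers hand dates back as Date values rather than strings):

import Foundation
import Vapor
import Fluent

enum MappingError: Error { case badDate }

final class Event: Model {
    var id: Node?
    var title: String
    var startsAt: Date // backed by an existing DATETIME column
    var exists: Bool = false

    // Assumes the driver returns DATETIME as text in this format.
    static let formatter: DateFormatter = {
        let f = DateFormatter()
        f.dateFormat = "yyyy-MM-dd HH:mm:ss"
        return f
    }()

    init(node: Node, in context: Context) throws {
        id = try node.extract("id")
        title = try node.extract("title")
        let raw: String = try node.extract("starts_at")
        guard let date = Event.formatter.date(from: raw) else {
            throw MappingError.badDate
        }
        startsAt = date
    }

    func makeNode(context: Context) throws -> Node {
        return try Node(node: [
            "id": id,
            "title": title,
            "starts_at": Event.formatter.string(from: startsAt)
        ])
    }

    // The tables already exist, so don't create or drop anything.
    static func prepare(_ database: Database) throws {}
    static func revert(_ database: Database) throws {}
}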
How do I configure a Fluent/MySQL database connection without putting my password in configure.swift in Vapor 3?

Question: The Vapor 3 documentation doesn't say much about database configuration other than to "register a DatabasesConfig struct to your services." Tutorials (such as this one) suggest that you implement the configuration in the App/configure.swift file like this:

let mysqlConfig = MySQLDatabaseConfig(
    hostname: "127.0.0.1",
    port: 3306,
    username: "root",
    password: "root",
    database: "mycooldb"
)
services.register(mysqlConfig)

But my configure.swift file is being tracked by git, and I don't want to commit my username and password. How do I supply an external configuration file for handling the database connection? It appears that earlier versions of Vapor used JSON configuration files. Is this functionality completely gone? I can't find any mention of it in the current documentation.

Answer: The most popular way to do this is using environment variables. You can set them in the Xcode scheme or the terminal:

export DB_PASSWORD=root

Then get it in your configuration:

guard let password = Environment.get("DB_PASSWORD") else {
    throw Abort(.internalServerError)
}

Question: While doing the Ray Wenderlich tutorial "Server Side Swift with Vapor: Persisting Models" I tried to add one more parameter (param) to the class Acronym.

import Vapor

final class Acronym: Model {
    var id: Node?
    var exists: Bool = false
    var short: String
    var long: String
    var param: String

    init(short: String, long: String, param: String) {
        self.id = nil
        self.short = short
        self.long = long
        self.param = param
    }

    init(node: Node, in context: Context) throws {
        id = try node.extract("id")
        short = try node.extract("short")
        long = try node.extract("long")
        param = try node.extract("param")
    }

    func makeNode(context: Context) throws -> Node {
        return try Node(node: [
            "id": id,
            "short": short,
            "long": long,
            "param": param
        ])
    }

    static func prepare(_ database: Database) throws {
        try database.create("acronyms") { users in
            users.id()
            users.string("short")
            users.string("long")
            users.string("param")
        }
    }

    static func revert(_ database: Database) throws {
        try database.delete("acronyms")
    }
}

At first I ran this code without the extra parameter, and it worked. But when I added one, it fails with:

Error: 500 The operation couldn't be completed. (PostgreSQL.DatabaseError error 1.)

My main.swift:

import Vapor
import VaporPostgreSQL

let drop = Droplet(
    preparations: [Acronym.self],
    providers: [VaporPostgreSQL.Provider.self]
)

drop.get("hello") { request in
    return "Hello, world!"
}

drop.get("version") { req in
    if let db = drop.database?.driver as? PostgreSQLDriver {
        let version = try db.raw("SELECT version()")
        return try JSON(node: version)
    } else {
        return "No db connection"
    }
}

drop.get("test") { request in
    var acronym = Acronym(short: "AFK", long: "Away From Keyboard", param: "One More Parametr")
    try acronym.save()
    return try JSON(node: Acronym.all().makeNode())
}

drop.run()

Answer: I assume you didn't revert the database. You changed the model's properties, so just write vapor run prepare --revert in the terminal. That will revert your database and Vapor will be able to create the new parameter.

Question: I'm trying to set up a Vapor 3 project with SQLite. In the configure.swift file, I have the following setup related to sqlite:

try services.register(FluentSQLiteProvider())
...
// Get the root directory of the project
// I have verified that the file is present at this path
let path = DirectoryConfig.detect().workDir + "db_name.db"

let sqlite: SQLiteDatabase
do {
    sqlite = try SQLiteDatabase(storage: .file(path: path))
    print("connected") // called
} catch {
    print(error) // not called
    return
}

var databases = DatabasesConfig()
databases.add(database: sqlite, as: .sqlite)
services.register(databases)

In the database, I have a table called posts that I want to query and return all entries from. This is the Post implementation inside /Sources/App/Models/:

final class Post: Content {
    var id: Int?
    var title: String
    var body: String

    init(id: Int? = nil, title: String, body: String) {
        self.id = id
        self.title = title
        self.body = body
    }
}

extension Post: SQLiteModel, Migration, Parameter { }

I have also added the migration in configure.swift:

var migrations = MigrationConfig()
migrations.add(model: Post.self, database: .sqlite)
services.register(migrations)

In routes.swift, I define the posts route as follows:

router.get("posts") { req in
    return Post.query(on: req).all()
}

Now, when calling localhost:8080/posts, I get:

[]

Have I not properly connected the database? Have I missed something?

Answer: It seems the table name generated by Fluent is different from your DB table name, as Fluent generates the table with a name that matches the Model class name. In your case, add a static property entity in your model class Post to define a custom table name, like this:

public static var entity: String {
    return "posts"
}

Question: I'm trying to build APIs using Swift and I've chosen to use Vapor. I've created a SQLite database and am able to connect to it using a DB client. Now I want my Swift Vapor project to connect to it as well using the FluentSQLite package. I've created my database in the root folder of my project: /Users/rutgerhuijsmans/Documents/runk-3.0. My database is called runk-3.0-database. The folder looks like this:

I try to connect to my DB using the following configuration:

import FluentSQLite
import Vapor

/// Called before your application initializes.
public func configure(_ config: inout Config, _ env: inout Environment, _ services: inout Services) throws {
    /// Register providers first
    try services.register(FluentSQLiteProvider())

    let sqlite: SQLiteDatabase?
    do {
        sqlite = try SQLiteDatabase(storage: .file(path: "runk-3.0-database"))
        print("data base connected") // This gets printed

        /// Register the configured SQLite database to the database config.
        var databases = DatabasesConfig()
        databases.add(database: sqlite!, as: .sqlite)
        services.register(databases)

        /// Configure migrations
        var migrations = MigrationConfig()
        migrations.add(model: User.self, database: .sqlite)
        services.register(migrations)
    } catch {
        print("couldn't connect") // This doesn't get printed
    }
}

What am I doing wrong?

Answer: As IMike17 explained, your code just creates a new DB file in the Build/Products/Debug (or Release) folder. You have to set the full path dynamically, as below:

do {
    let directory = DirectoryConfig.detect()
    let filePath = directory.workDir + "runk-3.0-database"
    sqlite = try SQLiteDatabase(storage: .file(path: filePath))
    ...

Question: I am writing a server using Swift 4 + Vapor framework, Fluent ORM and PostgreSQL as a driver. I've got a User model which should have subscribers and subscriptions (which are also User models). I have two options here: 1. store arrays with unique ids of subscriptions/subscribers, or 2. build a one-to-many User-User relation.
Which one do you think is better and how can I implement it?

Answer: Storing an array is not optimal. Querying your database to find all a User's subscribers will require parsing every User's subscriptions array and finding those which contain your target User's ID. A relation is a better idea. Fluent uses the Pivot class to model many-to-many relations. Because it's a self-referencing relation, to avoid ID key conflicts you will probably find it easiest to create your own 'through' model.

import FluentProvider
import Vapor

final class Subscription: Model, PivotProtocol {
    typealias Left = User
    typealias Right = User

    var subscriberId: Identifier
    var subscribedId: Identifier

    init(
        subscriberId: Identifier,
        subscribedId: Identifier
    ) {
        self.subscriberId = subscriberId
        self.subscribedId = subscribedId
    }

    let storage = Storage()

    static let leftIdKey = "subscriber_id"
    static let rightIdKey = "subscribed_id"

    init(row: Row) throws {
        subscriberId = try row.get("subscriber_id")
        subscribedId = try row.get("subscribed_id")
    }

    func makeRow() throws -> Row {
        var row = Row()
        try row.set("subscriber_id", subscriberId)
        try row.set("subscribed_id", subscribedId)
        return row
    }
}

extension User {
    var subscribers: Siblings<User, User, Subscription> {
        return siblings(localIdKey: "subscriber_id", foreignIdKey: "subscribed_id")
    }

    var subscribed: Siblings<User, User, Subscription> {
        return siblings(localIdKey: "subscribed_id", foreignIdKey: "subscriber_id")
    }
}

Question: Is it possible to use the in-memory FluentSQLite provider for testing purposes and FluentPostgreSQL for the app's models?

Answer: It depends... In short, for simple apps, yeah you can. You basically need to make your models generic and then set up the generic models from your configuration all the way down. See how the benchmark models are set up here. In reality - no, you can't. As soon as you want to do anything that isn't standard (a TEXT column type, etc.), you need to make your models specific to the DB type. The way to do it is to use the repository pattern and completely abstract away your database from your application logic. See the Vapor style guide for more details.

Question: I'm trying to create a user and an access token record in my database. However, I can't figure out how to do this. My code looks like this:

// Create new user
func create(_ req: Request) throws -> Future<AccessToken> {
    return try req.content.decode(User.self).flatMap { user in
        user.pushToken = ""
        user.create(on: req).map { _ -> EventLoopFuture<AccessToken> in
            let accessToken = AccessToken(accessToken: UUID().uuidString, userID: user.id!)
            return accessToken.create(on: req)
        }
    }
}

I create a user (this works well), then I want to create an access token tied to that user (through the user ID). Because of this I need to know the ID of the user that I just created. However, this code doesn't seem to compile. Xcode is giving me:

Missing return in a closure expected to return EventLoopFuture<AccessToken>

Answer: Missing a return in the user.create(on: req).map { _ -> EventLoopFuture<AccessToken> in ... } closure?
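For example, a sketch of the fix, keeping the question's models and force-unwrapped id. The key change is using flatMap instead of map on the user save, and returning the inner future:

// Create a new user, then create an access token tied to it.
func create(_ req: Request) throws -> Future<AccessToken> {
    return try req.content.decode(User.self).flatMap { user in
        user.pushToken = ""
        return user.create(on: req).flatMap { savedUser in
            let accessToken = AccessToken(accessToken: UUID().uuidString,
                                          userID: savedUser.id!)
            return accessToken.create(on: req)
        }
    }
}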
I'm just getting started with programming and Vapor, and I have the following querying problem. When a user makes a request to /api/trip/new to create a new trip, I want to retrieve the first driver from the database and assign their ID to a Trip object. Then I want to save both the trip and the trip request in the database.

I have the following code:

func newTripHandler(_ req: Request, data: TripRequestCreateData) throws -> Future<TripRequest> {
    let passenger = try req.requireAuthenticated(Passenger.self)
    let tripRequest = try TripRequest(pickupLat: data.pickupLat, pickupLng: data.pickupLng,
                                      destLat: data.destLat, destLng: data.destLng,
                                      pickupAddress: data.pickupAddress, destAddress: data.destAddress,
                                      passengerId: passenger.requireID())

    // Passenger's location
    let lat = tripRequest.pickupLat
    let lng = tripRequest.pickupLng

    // Passenger's destination
    let toLat = tripRequest.destLat
    let toLng = tripRequest.destLng

    let trip = Trip(passengerId: tripRequest.passengerId!, pickupLat: lat, pickupLng: lng,
                    destLat: toLat, destLng: toLng,
                    pickupAddress: tripRequest.pickupAddress, destAddress: tripRequest.destAddress)

    let driver = Driver.query(on: req).first().unwrap(or: Abort(.notFound))
    let driverId = driver.map(to: Driver.self) { driver in
        return driver.id
    }
    trip.driverId = driverId
    print(trip)

    return tripRequest.save(on: req)
}

struct TripRequestCreateData: Content {
    let pickupLat: Double
    let pickupLng: Double
    let destLat: Double
    let destLng: Double
    let pickupAddress: String
    let destAddress: String
}

I'm not sure how to save two models or how to properly retrieve the driver from the database. The driverId constant is of type EventLoopFuture instead of Driver, so I can't assign the ID to the trip's driverId property. How can I adjust my code to achieve this?

Answer: When working with the database, you only get Futures. (More about Futures in the Vapor Docs.) So to get the driver, you already used the correct code, but you have to go on inside the map code block. When returning a Future inside the map, use flatMap, so you don't have Futures inside Futures. Try this:

return Driver.query(on: req).first().unwrap(or: Abort(.notFound)).flatMap { driver in
    trip.driverId = driver.id
    return trip.save(on: req).flatMap { trip in
        return tripRequest.save(on: req)
    }
}

(I chose this way to show how Futures work - there are some ways to make this code cleaner, e.g. chaining Futures or using flatten.)

Question: This is a continuation of the questions I have asked from a tutorial by Paul Hudson on YouTube. I have tried to add items to a database (see image below). When I click on the "Add" button in the image above, the boxes should become EMPTY (see image below). (Quantum Pizza will not be added to the list of Statin Island Pizza and Country Pizza, because I have not done further coding, but it should look like the image below.) But the result is as follows:
Though .Quantum Pizza will not be added to the list of .Statin Island Pizza and .Country pizza, because I have not done further coding), but it should be as the image below - but, the result is as follows - Now, I am posting the codes ----- configure.swift - import Fluent import FluentSQLite import Vapor import Leaf // added public func configure(_ config: inout Config, _ env: inout Environment, _ services: inout Services) throws { // Register routes to the router let router = EngineRouter.default() try routes(router) services.register(router, as: Router.self) let leafProvider = LeafProvider() // added try services.register(leafProvider) // added config.prefer(LeafRenderer.self, for: ViewRenderer.self)// added let directoryConfig = DirectoryConfig.detect() services.register(directoryConfig) try services.register(FluentSQLiteProvider()) var databaseConfig = DatabasesConfig() let db = try SQLiteDatabase(storage: .file(path:"\(directoryConfig.workDir)pizza.db")) databaseConfig.add(database: db, as: .sqlite) services.register(databaseConfig) var migrationConfig = MigrationConfig() migrationConfig.add(model: Pizza.self, database: .sqlite) services.register(migrationConfig) let serverConfigure = NIOServerConfig.default(hostname: "0.0.0.0", port: 9090) services.register(serverConfigure) } routes.swift - import Routing import Vapor import FluentSQLite public func routes(_ router: Router) throws { router.get { req -> Future <View> in let Newyorker = Pizza(id: 5, name: "Statin Island Pizza", description: "Impractical Jokers Funny Pizza", price: 55) let Traditional = Pizza(id: 5, name: "Country Pizza ", description: "Johny Cash Special", price: 55) return try req.view().render("welcome",["pizza":[Newyorker,Traditional]]) } router.post(Pizza.self, at: "add") { req, pizza -> Future<Response> in return pizza.save(on:req).map(to:Response.self) { Pizza in return req.redirect(to: "/") } } } pizza.swift - import Foundation import Vapor import FluentSQLite struct Pizza: Encodable, Content, Decodable, SQLiteModel, Migration { var id: Int? var name: String var description: String var price: Int } leaf screenshot (I tried to paste code, but couldn't, in the correct format. So adding screeshot) - Edit 1: screenshot after I click on the Add button - Ill be happy to provide you any further information if you need. Also, I would like to know if the title of my question should be modfied or anyhing should be added to it. Thank you. Answer: Your forms action should be action="add" (you're missing the closing quotation to close the action)
http://thetopsites.net/projects/vapor/database.shtml
CC-MAIN-2020-45
en
refinedweb
As a developer, it's important that you test user interactions within your app to make sure that your users don't encounter unexpected results or have a poor experience with your app. You can test an app's user interface (UI) manually by running the app and trying the UI. But for a complex app, you couldn't cover all the permutations of user interactions within all the app's functionality. You would also have to repeat these manual tests for different device configurations in an emulator, and on different hardware devices. When you automate tests of UI interactions, you save time, and your testing is systematic. You can use suites of automated tests to perform all the UI interactions automatically, which makes it easier to run tests for different device configurations. To verify that your app's UI functions correctly, it's a good idea to get into the habit of creating UI tests as you code.

Espresso is a testing framework for Android that makes it easy to write reliable UI tests for an app. The framework, which is part of the Android Support Repository, provides APIs for writing UI tests to simulate user interactions within the app—everything from clicking buttons and navigating views to selecting menu items and entering data.

What you should already know

You should be able to:
- Create and run apps in Android Studio.
- Check for the Android Support Repository and install it if necessary.
- Create and edit UI elements using the layout editor and XML.
- Access UI elements from your code.
- Add a click handler to a Button.

What you'll learn
- How to set up Espresso in your app project.
- How to write Espresso tests.
- How to test for user input and check for the correct output.
- How to find a spinner, click one of its items, and check for the correct output.
- How to use the Record Espresso Test function in Android Studio.

What you'll do
- Modify a project to create Espresso tests.
- Test the app's text input and output.
- Test clicking a spinner item and check its output.
- Record an Espresso test of a RecyclerView.

In this practical you modify the TwoActivities project from a previous lesson. You set up Espresso in the project for testing, and then you test the app's functionality. The TwoActivities app lets a user enter text in a text field and tap the Send button, as shown on the left side of the figure below. In the second Activity, the user views the text they entered, as shown on the right side of the figure below.

To use Espresso, you must already have the Android Support Repository installed with Android Studio. You may also need to configure Espresso in your project. In this task you check to see if the repository is installed. If it is not, you will install it.

1.1 Check for the Android Support Repository
- Download the TwoActivitiesLifecycle project from an earlier codelab on creating and using an Activity.
- Open the project in Android Studio, and choose Tools > Android > SDK Manager. The Android SDK Default Preferences pane appears.
- Click the SDK Tools tab and expand Support Repository. Check that Android Support Repository is installed; if it is not, install it.

1.2 Configure Espresso for the project

When you start a project for the phone and tablet form factor using API 15: Android 4.0.3 (Ice Cream Sandwich) as the minimum SDK, Android Studio normally adds the Espresso dependency statements for you, but you should confirm that they are present.
- Open the build.gradle (Module: app) file.
- Check if the following is included (along with other dependencies) in the dependencies section:

testImplementation 'junit:junit:4.12'
androidTestImplementation 'com.android.support.test:runner:1.0.1'
androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.1'

If the file doesn't include the above dependency statements, enter them into the dependencies section.

- Android Studio also adds the following instrumentation statement to the end of the defaultConfig section of a new project:

testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"

If the file doesn't include the above instrumentation statement, enter it at the end of the defaultConfig section.

You write Espresso tests based on what a user might do while interacting with your app. The Espresso tests are classes that are separate from your app's code. You can create as many tests as you need in order to interact with the View elements and check the state of the View in your app. With Espresso you use the following types of Hamcrest expressions to help find View elements and interact with them:

- ViewMatchers: Hamcrest matcher expressions in the ViewMatchers class that let you find a View in the current View hierarchy so that you can examine something or perform some action.
- ViewActions: Hamcrest action expressions in the ViewActions class that let you perform an action on a View found by a ViewMatcher.
- ViewAssertions: Hamcrest assertion expressions in the ViewAssertions class that let you assert or check the state of a View found by a ViewMatcher.

The following shows how all three expressions work together:

- Use a ViewMatcher to find a View: onView(withId(R.id.my_view))
- Use a ViewAction to perform an action: .perform(click())
- Use a ViewAssertion to check if the result of the action matches an assertion: .check(matches(isDisplayed()))

The following shows how the above expressions are used together in a statement:

onView(withId(R.id.my_view))
    .perform(click())
    .check(matches(isDisplayed()));

2.1 Run the example test

Android Studio creates a blank Espresso test class when you create the project.

- In the Project > Android pane, open java > com.example.android.twoactivities (androidTest), and open ExampleInstrumentedTest. The project is supplied with this example test. You can create as many tests as you wish in this folder. In the next step you will edit the example test.

2.2 Define a class for a test and set up the Activity

You will edit the example test rather than add a new one. To make the example test more understandable, you will rename the class from ExampleInstrumentedTest to ActivityInputOutputTest. Follow these steps:

- Right-click (or Control-click) ExampleInstrumentedTest in the Project > Android pane, and choose Refactor > Rename.
- Change the class name to ActivityInputOutputTest, and leave all options checked. Click Refactor.
- Add the following to the top of the ActivityInputOutputTest class, before the first @Test annotation:

@Rule
public ActivityTestRule<MainActivity> mActivityRule =
        new ActivityTestRule<>(MainActivity.class);

@Rule: A rule adds or redefines the behavior of each test method in a reusable way, using one of the test rule classes that the Android Testing Support Library provides, such as ActivityTestRule or ServiceTestRule. The rule above uses an ActivityTestRule object, which provides functional testing of a single Activity—in this case, MainActivity.class. During the duration of the test you will be able to manipulate your Activity directly, using ViewMatchers, ViewActions, and ViewAssertions.

@Test: The @Test annotation tells JUnit that the public void method to which it is attached can be run as a test case.
In the above statement, ActivityTestRule may turn red at first, but then Android Studio adds the following import statement automatically:

import android.support.test.rule.ActivityTestRule;

2.3 Test switching from one Activity to another

The TwoActivities app includes:
- MainActivity: Includes the button_main button for switching to SecondActivity, and the text_header_reply view that serves as a text heading.
- SecondActivity: Includes the button_second button for switching to MainActivity, and the text_header view that serves as a text heading.

When you have an app that switches from one Activity to another, you should test that capability. The TwoActivities app provides a text entry field and a Send button (the button_main id). Clicking Send launches SecondActivity with the entered text shown in the text_header view of SecondActivity. But what happens if no text is entered? Will SecondActivity still appear? The ActivityInputOutputTest class of tests will show that the View elements in SecondActivity appear regardless of whether text is entered.

Follow these steps to add your tests to ActivityInputOutputTest:

- Add the beginning of the activityLaunch() method to ActivityInputOutputTest. This method will test whether the SecondActivity View elements appear when clicking the Button. Include the @Test annotation on a line immediately above the method:

@Test
public void activityLaunch() {

- Add a combined ViewMatcher and ViewAction expression to the activityLaunch() method to locate the View containing the button_main Button, and include a ViewAction expression to perform a click.

onView(withId(R.id.button_main)).perform(click());

The onView() method lets you use ViewMatcher arguments to find View elements. It searches the View hierarchy to locate a corresponding View instance that meets some given criteria—in this case, the button_main View. The .perform(click()) expression is a ViewAction expression that performs a click on the View. In the above onView statement, onView, withId, and click may turn red at first, but then Android Studio adds import statements for them.

- Add another ViewMatcher expression to the activityLaunch() method to find the text_header View (which is in SecondActivity), and a ViewAssertion expression to check whether the View is displayed:

onView(withId(R.id.text_header)).check(matches(isDisplayed()));

This statement uses the onView() method to locate the text_header View for SecondActivity and then checks to see if it is displayed after clicking the button_main Button. The check() method may turn red at first, but then Android Studio adds an import statement for it.

- Add similar statements to test whether clicking the button_second Button in SecondActivity switches back to MainActivity: locate the button_second Button, perform a click, and check that the text_header_reply View (which is in MainActivity) is displayed.
- Run the test. The app starts, the test clicks the Button in MainActivity, and SecondActivity View elements appear. The test then clicks the Button in SecondActivity, and MainActivity View elements appear.

The Run window (the bottom pane of Android Studio) shows the progress of the test, and when it finishes, it displays "Tests ran to completion." In the left column Android Studio displays "All Tests Passed". The drop-down menu next to the Run icon in the Android Studio toolbar now shows the name of the test class (ActivityInputOutputTest). You can click the Run icon to run the test, or switch to app in the dropdown menu and then click the Run icon to run the app.

2.4 Test text input and output

In this step you will write a test for text input and output.
The TwoActivities app uses the editText_main EditText for input, the button_main Button for sending the input to SecondActivity, and the TextView in SecondActivity that shows the output in the field with the id text_message.

- Add another @Test annotation and a new textInputOutput() method. In the method, use a ViewMatcher to find the editText_main EditText and a ViewAction to enter the text "This is a test." Then use another ViewMatcher to find the View with the button_main Button, and another ViewAction to click the Button.
- Add a ViewMatcher to the method to locate the text_message TextView in SecondActivity, and a ViewAssertion to see if the output matches the input, to test that the message was correctly sent—it should match "This is a test." (Be sure to include the period at the end.)

onView(withId(R.id.text_message))
    .check(matches(withText("This is a test.")));

- Run the test. As the test runs, the app starts and the text is automatically entered as input; the Button is clicked, and SecondActivity appears with the text.

The bottom pane of Android Studio shows the progress of the test, and when finished, it displays "Tests ran to completion." In the left column Android Studio displays "All Tests Passed". You have successfully tested the EditText input, the Send Button, and the TextView output.

2.5 Introduce an error to show a test failing

Introduce an error in the test to see what a failed test looks like.

- Change the matches check so that it expects text that differs from what the test enters, and run the test again. The test fails, and the Run window shows an assertion error that identifies the View that didn't match (res-name=text_message, and so on). Other fatal error messages appear after the above, due to the cascading effect of a failure leading to other failures. You can safely ignore them and fix the test itself.

Task 2 solution code

See ActivityInputOutputTest.java in the Android Studio project: TwoActivitiesEspresso

The Espresso onView() method finds a View that you can test. This method will find a View in the current View hierarchy. But you need to be careful—in an AdapterView such as a Spinner, the View is typically dynamically populated with child View elements at runtime. This means there is a possibility that the View you want to test may not be in the View hierarchy at the time of the test.

PhoneNumberSpinner is a starter app you can use to test a Spinner. It shows a Spinner, with the id label_spinner, for choosing the label of a phone number (Home, Work, Mobile, and Other). It then displays the phone number and Spinner choice in a TextView.

3.1 Create the test method

- Download the PhoneNumberSpinnerEspresso project and then open the project in Android Studio.
- Run the app. Enter a phone number, and choose a label from the Spinner, as shown on the left side of the figure below. The result should appear in the TextView and in a Toast message, as shown on the right side of the figure.
- Create a test class (SpinnerSelectionTest in the solution code) in the androidTest folder, with an ActivityTestRule for MainActivity named mActivityRule, as you did for ActivityInputOutputTest.

3.2 Access the array used for the Spinner items

You want the test to click each item in the Spinner based on the number of elements in the array. But how do you access the array?

- Create the iterateSpinnerItems() method as public, returning void, and assign the array used for the Spinner items to a new array to use within the iterateSpinnerItems() method:

@Test
public void iterateSpinnerItems() {
    String[] myArray = mActivityRule.getActivity().getResources()
            .getStringArray(R.array.labels_array);
}

In the statement above, the test accesses the array (with the id labels_array) by establishing the context with the getActivity() method of the ActivityTestRule class, and getting a resources instance using getResources().

- Assign the length of the array to size, and start the beginning of a for loop using the size as the maximum number for a counter.
int size = myArray.length;
for (int i = 0; i < size; i++) {

3.3 Locate Spinner items and click on them

- Add an onView() statement within the for loop to find the Spinner and click on it:

// Find the spinner and click on it.
onView(withId(R.id.label_spinner)).perform(click());

A user must click the Spinner itself in order to click any item in the Spinner, so your test must click the Spinner first before clicking the item.

- Write an onData() statement to find and click a Spinner item:

// Find the spinner item and click on it.
onData(is(myArray[i])).perform(click());

The above statement matches if the object is a specific item in the Spinner, as specified by the myArray[i] array element. If onData appears in red, click the word, and then click the red light bulb icon that appears in the left margin. Choose the following in the pop-up menu: Static import method 'android.support.test.espresso.Espresso.onData'. If is appears in red, click the word, and then click the red light bulb icon that appears in the left margin. Choose the following in the pop-up menu: Static import method...> Matchers.is (org.hamcrest)

- Add another onView() statement to the for loop to check to see if the resulting text_phonelabel matches the Spinner item specified by myArray[i]:

// Find the text view and check that the spinner item
// is part of the string.
onView(withId(R.id.text_phonelabel))
    .check(matches(withText(containsString(myArray[i]))));

If containsString appears in red, click the word, and then click the red light bulb icon that appears in the left margin. Choose the following in the pop-up menu: Static import method...> Matchers.containsString (org.hamcrest)

- Run the test.

The test runs the app, clicks the Spinner, and "exercises" the Spinner—it clicks each Spinner item from top to bottom, checking to see if the item appears in the text_phonelabel TextView. It doesn't matter how many Spinner items are defined in the array, or what language is used for the Spinner items—the test performs all of them and checks their output against the array. The bottom pane of Android Studio shows the progress of the test, and when finished, it displays "Tests ran to completion." In the left column Android Studio displays "All Tests Passed".

Task 3 solution code

See SpinnerSelectionTest.java in the Android Studio project: PhoneNumberSpinnerEspresso

Android Studio lets you record an Espresso test, which is useful for generating tests quickly. While recording a test, you use the app as an ordinary user would: Android Studio records your clicks and other interactions and converts them into test code, and you can add assertions to check whether a View matches an expected state. To demonstrate test recording, you will record a test of the Scorekeeper app created in the practical on using drawables, styles, and themes.

4.1 Open and run the app

- Download the Scorekeeper project that you created in Android fundamentals 5.1: Drawables, styles, and themes.
- Open the Scorekeeper project in Android Studio.
- Run the app to ensure that it runs properly.

The Scorekeeper app consists of two sets of Button elements and two TextView elements, which are used to keep track of the score for any point-based game with two players.

4.2 Record the test

- Choose Run > Record Espresso Test, select your deployment target (an emulator or a device), and click OK. The Record Your Test dialog appears, and the Debugger pane appears at the bottom of the Android Studio window. If you are using an emulator, the emulator also appears.
- Click Add Assertion in the Record Your Test window. A screenshot of the app's UI appears in a pane on the right side of the window, and the Select an element from screenshot option appears in the dropdown menu. Select the score (1) in the screenshot as the UI element you want to check, as shown in the figure below.
- Choose text is from the second dropdown menu, as shown in the figure below. The text you expect to see (1) is already entered in the field below the dropdown menu.
- Click Save Assertion.
- To record another click, on your emulator or device tap the minus (–) ImageButton for Team 1 in the app. The Record Your Test window shows the action that was recorded ("Tap AppCompatImageButton with the content description Minus Button").
- Click Add Assertion in the Record Your Test window. The app's UI appears in the right-side pane as before. Select the score (0) in the screenshot as the UI element you want to check.
- Choose text is from the second dropdown menu, as shown in the figure below. The text you expect to see (0) is already entered in the field below the dropdown menu.
- Click Save Assertion, and then click OK.
- In the dialog that appears, edit the name of the test to ScorePlusMinusTest so that it is easy for others to understand the purpose of the test.
- Android Studio may display a request to add more dependencies to your Gradle Build file. Click Yes to add the dependencies. Android Studio adds the required testing dependencies to the dependencies section of the build.gradle (Module: app) file.
- Expand com.example.android.scorekeeper (androidTest) to see the test, and run the test. It should pass. Run it again, and it should pass again.
The following is the test, as recorded in the ScorePlusMinusTest.java file:
// ... Package and import statements
@LargeTest
@RunWith(AndroidJUnit4.class)
public class ScorePlusMinusTest {
    @Rule
    public ActivityTestRule<MainActivity> mActivityTestRule =
        new ActivityTestRule<>(MainActivity.class);

    @Test
    public void scorePlusMinusTest() {
        ViewInteraction appCompatImageButton = onView(
            allOf(withId(R.id.increaseTeam1), withContentDescription("Plus Button"),
                childAtPosition(
                    childAtPosition(
                        withClassName(is("android.widget.LinearLayout")), 0), 0),
                isDisplayed()));
        appCompatImageButton.perform(click());

        ViewInteraction textView = onView(
            allOf(withId(R.id.score_1), withText("1"),
                childAtPosition(
                    childAtPosition(IsInstanceOf
                        .<View>instanceOf(android.widget.LinearLayout.class), 0), 2),
                isDisplayed()));
        textView.check(matches(withText("1")));

        ViewInteraction appCompatImageButton2 = onView(
            allOf(withId(R.id.decreaseTeam1), withContentDescription("Minus Button"),
                childAtPosition(
                    childAtPosition(
                        withClassName(is("android.widget.LinearLayout")), 0), 1),
                isDisplayed()));
        appCompatImageButton2.perform(click());

        ViewInteraction textView2 = onView(
            allOf(withId(R.id.score_1), withText("0"),
                childAtPosition(
                    childAtPosition(IsInstanceOf
                        .<View>instanceOf(android.widget.LinearLayout.class), 0), 2),
                isDisplayed()));
        textView2.check(matches(withText("0")));
    }
}
The recorded code uses the ViewInteraction class, which is the primary interface for performing actions or assertions on View elements, providing both check() and perform() methods.
Examine the test code to see how it works:
- Perform: The code below uses a method called childAtPosition(), which is defined as a custom Matcher, and the perform() method to click an ImageButton:
ViewInteraction appCompatImageButton = onView(
    allOf(withId(R.id.increaseTeam1), withContentDescription("Plus Button"),
        childAtPosition(
            childAtPosition(
                withClassName(is("android.widget.LinearLayout")), 0), 0),
        isDisplayed()));
appCompatImageButton.perform(click());
- Check whether it matches the assertion: The code below also uses the childAtPosition() custom Matcher, and checks to see if the clicked item matches the assertion that it should be "1":
ViewInteraction textView = onView(
    allOf(withId(R.id.score_1), withText("1"),
        childAtPosition(
            childAtPosition(IsInstanceOf
                .<View>instanceOf(android.widget.LinearLayout.class), 0), 2),
        isDisplayed()));
textView.check(matches(withText("1")));
- Custom Matcher: The childAtPosition() method used above is defined at the bottom of the recorded test file as a custom Matcher that extends the abstract TypeSafeMatcher class.
You can record multiple interactions with the UI in one recording session. You can also record multiple tests, and edit the tests to perform more actions, using the recorded code as a snippet to copy, paste, and edit.
Task 4 solution code
See ScorePlusMinusTest.java in the Android Studio project: ScorekeeperEspresso
You learned how to create a RecyclerView in another practical. The app lets you scroll a list of words from "Word 1" to "Word 19". When you tap the FloatingActionButton, a new word appears in the list ("+ Word 20"). Like an AdapterView (such as a Spinner), a RecyclerView dynamically populates child View elements.
Challenge: Fortunately, you can record an Espresso test of using the RecyclerView. Record a test that taps the FloatingActionButton, and check to see if a new word appears in the list ("+ Word 20").
Challenge solution code
See RecyclerViewTest.java in the Android Studio project: RecyclerViewEspresso
To set up Espresso to test an Android Studio project:
- In Android Studio, check for and install the Android Support Repository.
- Add dependencies to the build.gradle (Module: app) file:
testImplementation 'junit:junit:4.12'
androidTestImplementation 'com.android.support.test:runner:1.0.1'
androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.1'
- Add the following instrumentation statement to the end of the defaultConfig section:
testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
Instrumentation is a set of control methods, or hooks, in the Android system.
- On your test device, turn off animations. To do this, turn on USB Debugging. Then in the Settings app, select Developer Options > Drawing. Turn off window animation scale, transition animation scale, and animator duration scale.
Test annotations:
@RunWith(AndroidJUnit4.class): Create an instrumented JUnit 4 test class.
@Rule: Add or redefine the behavior of each test method in a reusable way, using one of the test rule classes that the Android Testing Support Library provides, such as ActivityTestRule or ServiceTestRule.
@Test: The @Test annotation tells JUnit that the public void method to which it is attached can be run as a test case.
Test code:
The ViewMatchers class lets you find a view in the current View hierarchy so that you can examine something or perform an action.
The ViewActions class lets you perform an action on a view found by a ViewMatcher.
The ViewAssertions class lets you assert or check the state of a view found by a ViewMatcher.
To test a spinner:
- Use onData() with a View that is dynamically populated by an adapter at runtime.
- Get items from an app's array by establishing the context with getActivity() and getting a resources instance using getResources().
- Use onData() to find and click each spinner item.
- Use onView() with a ViewAction and a ViewAssertion to check if the output matches the selected spinner item.
The related concept documentation is in 6.1: UI testing.
Android Studio and Android developer documentation:
- Test apps on Android
- Fundamentals of Testing
- Testing UI for a single app
- Building instrumented unit tests
- Espresso recipes
- The Hamcrest Tutorial
- Hamcrest API and Utility Classes
- Test support APIs
Record another Espresso test for the ScorekeeperEspresso app. This test should tap the Night Mode option in the options menu and determine whether the Day Mode option appears in its place. The test should then tap the Day Mode option and determine whether the Night Mode option appears in its place.
Answer these questions
Question 1
Which steps do you perform to test a View interaction, and in what order? Choose one:
- Match a View, assert and verify the result, and perform an action.
- Match a View, perform an action, and assert and verify the result.
- Perform an action, match a view, and assert and verify the result.
- Perform an action, and assert and verify the result.
Question 2
Which of the following annotations enables an instrumented JUnit 4 test class? Choose one:
@RunWith
@Rule
@Test
@RunWith and @Test
Question 3
Which method would you use to find a child View in an AdapterView? Choose one:
onData() to load the adapter and enable the child View to appear on the screen.
onView() to load the View from the current View hierarchy.
onView().check() to check the current View.
onView().perform() to perform a click on the current View.
Submit your app for grading
Guidance for graders
Check that the test meets the following criteria:
- The test appears in the com.example.android.scorekeeper (androidTest) folder.
- The test automatically switches the app from Day Mode to Night Mode, and then back to Day Mode.
- The test passes more than once.
To find the next practical codelab in the Android Developer Fundamentals (V2) course, see Codelabs for Android Developer Fundamentals (V2). For an overview of the course, including links to the concept chapters, apps, and slides, see Android Developer Fundamentals (Version 2).
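As a rough guide for the Night Mode homework above, a hand-written version of the recorded test might look like the sketch below. The menu-item labels are taken from the homework description; the use of openActionBarOverflowOrOptionsMenu() assumes the options appear in the overflow menu, which may differ in your app.
@Test
public void nightModeTest() {
    // Open the options menu and tap "Night Mode".
    openActionBarOverflowOrOptionsMenu(InstrumentationRegistry.getTargetContext());
    onView(withText("Night Mode")).perform(click());
    // Reopen the menu: "Day Mode" should now appear in its place.
    openActionBarOverflowOrOptionsMenu(InstrumentationRegistry.getTargetContext());
    onView(withText("Day Mode")).check(matches(isDisplayed()));
    // Tap "Day Mode" and verify that "Night Mode" reappears.
    onView(withText("Day Mode")).perform(click());
    openActionBarOverflowOrOptionsMenu(InstrumentationRegistry.getTargetContext());
    onView(withText("Night Mode")).check(matches(isDisplayed()));
}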
https://codelabs.developers.google.com/codelabs/android-training-espresso-for-ui-testing?hl=en
CC-MAIN-2020-45
en
refinedweb
During Build 2015 Microsoft announced a bunch of new tools aimed at helping developers build cross platform applications. Amongst the announcements, they let us know that ASP.NET was now available and ready to run on Mac and Linux natively. Up until this point there have been a few different ways to get .NET applications running on Unix systems but none of them were truly native or supported by Microsoft. With this announcement and the release of Visual Studio Code—Microsoft’s cross platform development tool—you can now develop cross platform .NET applications on your favourite operating system.
Today I will show you how to get started with setting up your .NET development environment on a Mac running Yosemite and show you how to build a Console and an ASP.NET MVC 6 call log application using Visual Studio Code and ASP.NET 5. Feel free to download all the code from the Github repository if all you want to do is setup your local environment and not worry about writing all the application code.
Our tools
- A Mac computer running Yosemite or above
- Homebrew package manager
- Node.js
- A Twilio Account and a Twilio Phone Number – Sign up for free!
Setup
To get started we need to make sure all the necessary tools are installed. If you are running on a Mac and still don’t have Homebrew installed you’re missing out, so download and install it from brew.sh. Once homebrew is installed we can go ahead and download .NET Version Manager. This will install everything we need in order to run .NET as well as some useful tools we will talk about briefly.
brew tap aspnet/dnx
brew update
brew install dnvm
When the installation completes the following will have been installed for you:
- DNVM (.NET Version Manager): is a set of command line instructions which allow you to configure your .NET Runtime. We can use it to specify which version of the .NET Execution Framework to use.
- DNX (.NET Execution Framework): is the runtime environment for creating .NET applications for Windows, Mac and Linux.
- DNU (.NET Development Utility): is a set of command line tools that allow us to build, package and publish projects created with DNX.
Make sure that all DNVM goodies are available on your terminal instance by running source dnvm.sh. For any future instances just run the following to permanently add it to your bash profile or equivalent according to your environment.
echo "source dnvm.sh" >> ~/.bash_profile
Let’s go ahead and check that DNVM is actually installed by running the following:
dnvm
If you see a screen like the above you know you have DNVM properly installed, so let’s install the latest version of DNX.
dnvm upgrade
We know our .NET Virtual Environment is now installed, so it is time to install Yeoman. Yeoman is a command line application that generates project scaffolds. It offers generators for a plethora of programming languages including .NET. To install Yeoman first we need to make sure we have Node.js installed. You can check that by running the following on terminal. You should see your Node.js version if it’s already installed.
node -v
If you don’t have it installed the good news is you can also install it using homebrew by issuing the following terminal command.
brew install node
With Node.js in place, install Yeoman itself:
npm install -g yo
We now need to make sure we have the Yeoman .NET generator downloaded by running:
npm install -g generator-aspnet
There is just one more thing we need to do, which is install a sweet IDE that will give us all the awesome functionality we get from Visual Studio, but now on a Mac.
Go ahead and download and install Visual Studio Code. You can find more information about extra setup functionalities here.
Building a console application
With all of the necessary tools we installed let’s build a command line application to make sure everything works as expected. Start by going back to your terminal and running:
yo aspnet
It will prompt you to choose a project type and enter a project name. Choose Console Application and call it TwilioCallLogConsole. When you press enter, Yeoman will scaffold a console application project for you. You don’t need to run any of the other commands suggested on the screen at this point.
Open up Visual Studio Code and choose File > Open, and select the folder where your project was created. Or if you followed the extra instructions from the Visual Studio Code website you can just run the following:
cd TwilioCallLogConsole && code .
When it finishes loading you will notice a notification at the top of Visual Studio Code telling you about unresolved dependencies. Ignore that until you open up the file project.json and add a dependency to the Twilio .NET Library to it.
"dependencies": {
    "Twilio": "4.0.3"
},
Go ahead and click the Restore button and all the dependencies will be installed. Once that completes we can modify our application to fetch data with Twilio’s REST API. Open Program.cs and add a reference to the Twilio library at the top.
using Twilio;
In that same file change the Main method to the following:
public void Main(string[] args)
{
    // Instantiate a new Twilio Rest Client
    var client = new TwilioRestClient("your-twilio-account-sid", "your-twilio-auth-token");
    // Select all calls from my account
    var calls = client.ListCalls(new CallListRequest());
    // Check for any exceptions
    if (calls.RestException != null)
    {
        throw new FormatException(calls.RestException.Message);
    }
    // Loop through them and show information
    foreach (var call in calls.Calls)
    {
        var detail = String.Format("From: {0}, Day: {1}, Duration: {2}s",
            call.From, call.DateCreated, call.Duration);
        Console.WriteLine(detail);
    }
}
Don’t forget to replace the account sid and auth token with the values from your account dashboard. We’re creating a new object for the Twilio Rest API, listing the calls, looping through each one of them to show who started it and when, and showing how long they took.
Save this, and back on the terminal type:
dnx run
You can also do this straight from Visual Studio Code by hitting ⇧⌘P and typing dnx run. A new Terminal instance should open and run the application for you.
Congratulations, you have just built your first .NET command line application on a Mac and the setup was much easier than it would have been on Windows. As you saw it’s pretty straightforward to build command line applications on a Mac with .NET, but how about building ASP.NET MVC apps? Stay with me as we’re just about to do that.
Building a .NET MVC application
Back on the terminal let’s get Yeoman to scaffold a new application of type Web Application Basic [without Membership and Authorization] and call it TwilioCallLogMVC.
Now that the application layout has been scaffolded, open it up with Visual Studio Code. Before we do anything we need to make sure all the dependencies are installed, so hold ⇧⌘P and type dnu restore. Once all packages have been restored hold ⇧⌘P again and type dnx web to start your local webserver. On your browser you can now go to http://localhost:5000.
We have a basic ASP.NET MVC application running on a Mac but let’s add some extra functionality to it and reproduce our command line application with it. Open up project.json and add a Twilio dependency to it. Notice Visual Studio Code finds the dependency for you automatically as soon as you start typing it. Once that’s done, make sure you run dnu restore again so the dependency is downloaded.
Open Controllers/HomeController.cs and add a reference to the Twilio library at the top.
using Twilio;
Change the Index endpoint to accept a string called phoneNumber. Then add the code needed to interact with the Twilio Rest API. Also don’t forget to replace the account sid and auth token with the values from your account dashboard.
public IActionResult Index(string phoneNumber)
{
    // Instantiate a new Twilio Rest Client
    var client = new TwilioRestClient("your-twilio-account-sid", "your-twilio-auth-token");
    // Select all calls from my account based on a phoneNumber
    var calls = client.ListCalls(new CallListRequest(){To = phoneNumber});
    // Check for any exceptions
    if (calls.RestException != null)
    {
        throw new FormatException(calls.RestException.Message);
    }
    return View(calls.Calls);
}
We have done one thing differently here, which is allowing for filtering of the results. Now we need to modify the view so it knows how to display information about our calls. To do that we will bind the view to the Twilio.Call model and show the user a form where they can enter a telephone number to do the filtering.
Open Views/Home/Index.cshtml and replace its contents with the following change:
@model IEnumerable<Twilio.Call>
<div>
    @using (Html.BeginForm("Index", "Home", FormMethod.Get))
    {
        <input type="text" name="phoneNumber" placeholder="Enter a phone number" />
        <button type="submit">Search</button>
    }
</div>
This will show you a form where you can type a telephone number but we don’t have a way to display the results yet. Let’s change that by adding a table and a loop to go through the results and show one row per entry.
@model IEnumerable<Twilio.Call>
<div>
    @using (Html.BeginForm("Index", "Home", FormMethod.Get))
    {
        <input type="text" name="phoneNumber" placeholder="Enter a phone number" />
        <button type="submit">Search</button>
    }
    <br><br>
    <table class="table">
        <tr>
            <th>@Html.DisplayNameFor(model => model.To)</th>
            <th>@Html.DisplayNameFor(model => model.From)</th>
            <th>@Html.DisplayNameFor(model => model.DateCreated)</th>
            <th>@Html.DisplayNameFor(model => model.Duration)</th>
        </tr>
        @foreach (var item in Model)
        {
            <tr>
                <td>@Html.DisplayFor(modelItem => item.To)</td>
                <td>@Html.DisplayFor(modelItem => item.From)</td>
                <td>@Html.DisplayFor(modelItem => item.DateCreated)</td>
                <td>@Html.DisplayFor(modelItem => item.Duration)s</td>
            </tr>
        }
    </table>
</div>
If you now run this again in your browser you should see a form asking you to enter a phone number and some data showing all your call logs. If you enter a phone number you own on the form and press search it will then filter the table to only return entries for that number.
All you need at your fingertips
This is how easy it can be to build .NET applications on a Mac or any other Unix environment. Even though the applications we’ve just built are fairly simple, it was fun and pleasant to build and run them on a Mac. How about trying to run your existing applications on a Mac and seeing if they’re already cross platform? Chances are you are already closer than you think. I would love to see what you come up with. Hit me up on Twitter @marcos_placona or by email on marcos@twilio.com to tell me more about it.
https://www.twilio.com/blog/2015/08/getting-started-with-asp-net-5-and-visual-studio-code-on-a-mac.html
CC-MAIN-2020-45
en
refinedweb
Created on 2010-09-24 14:31 by jayt, last changed 2019-09-12 11:21 by shihai1991. This issue is now closed.
I want to create a custom interactive shell where I continually do parse_args. Like the following:
parser = argparse.ArgumentParser()
command = raw_input()
while(True):
    args = parser.parse_args(shlex.split(command))
    # Do some magic stuff
    command = raw_input()
The problem is that if I give it invalid input, it errors and exits with a help message. I learned from the argparse-users group that you can override the exit method like the following:
class MyParser(ArgumentParser):
    def exit(self, status=0, message=None):
        # do whatever you want here
It would be nice to have this usage documented, perhaps along with best practices for doing help messages in this scenario.
Do you want to work on a patch? (Aside: you may want to learn about the cmd and shlex modules for read-eval-print-loop programs :)
I am also trying to use argparse interactively, but in this case by combining it with the cmd module. So I'm doing something like below:
class MyCmd(cmd.Cmd):
    parser = argparse.ArgumentParser(prog='addobject')
    parser.add_argument('attribute1')
    parser.add_argument('attribute2')
    parser.add_argument('attribute3')
    def do_addobject(self, line):
        args = MyCmd.parser.parse_args(line.split())
        newobject = object(args.attribute1, args.attribute2, args.attribute3)
        myobjects.append(newobject)
I'm faced with the same problem that when given invalid input, parse_args exits the program completely, instead of exiting just to the Cmd shell. I have the feeling that this use case is sufficiently common such that it would be good if people did not have to override the exit method themselves, and instead an alternative to parse_args was provided that only raises exceptions for the surrounding code to handle rather than exiting the program entirely.
You can always catch SystemExit.
In the short term, just catch the SystemExit. In the slightly longer term, we could certainly provide a subclass, say, ErrorRaisingArgumentParser, that overrides .exit and .error to do nothing but raise an exception with the message they would have printed. We'd probably have to introduce a new Exception subclass though, maybe ArgumentParserExit or something like that. Anyway if you're interested in this, please file a new ticket (preferably with a patch).
Regardless of whether we ever provide the subclass, we certainly need to patch the documentation to tell people how to override error and exit.
I don't think it's best to create a new subclass to throw an ArgumentParserExit exception; if I read the stack trace I'd see that an ArgumentError was thrown, then caught, then an ArgumentParserExit was thrown, which IMHO is confusing. In the current design, parse_known_args catches an ArgumentError and then exits. I propose that the user be optionally allowed to turn off the handling of ArgumentError and to handle it himself instead through an exit_on_argument_error flag. Attached patch does this. Also I think this issue falls under component 'Lib' too.
FWIW unittest had a similar issue and it's been solved by adding an 'exit' argument to unittest.main() [0]. I think using an attribute here might be fine. The patch contains some trailing whitespace that should be removed, also it might be enough to name the attribute "exit_on_error". It should also include tests to check that the attribute is set with the correct default value and that it doesn't raise SystemExit when the attribute is False.
[0]:
Updated previous patch with test cases and renamed the exit_on_argument_error flag to exit_on_error.
Looks good to me.
What is the status of this? If the patch looks good, then will it be pushed into 3.4? It's great that this patch was provided.
Xuanji, can you submit a contributor agreement, please?
The patch is missing an update to the documentation. (Really the patch should have been in a separate issue, as requested, since this one is about improving the documentation for the existing released versions. I guess we'll have to open a new issue for updating the docs in the existing versions).
The patch doesn't work for 3.3 (I think it's just because the line numbers are different), but looking over what the patch does, it looks like parse_known_args will return a value for args if there is an unrecognized argument, which will cause parse_args to call error() (it should raise ArgumentError instead). It doesn't look like xuanji has signed a CLA. Should we create a new issue, and have someone else create a new patch, and let this issue just be about the docs?
Yes, I think opening a new issue at this point might be a good idea. The reason is that there are changes either in place or pending in other issues that involve the parse_known_args code, so a new patch is probably required regardless. I wish I had time to review and commit all the argparse patches, but so far I haven't gotten to them. They are on my todo list somewhere, though :)
The exit and error methods are mentioned in the 3.4 documentation, but there are no examples of modifying them.
16.4.5.9. Exiting methods
ArgumentParser.exit(status=0, message=None)
ArgumentParser.error(message)
test_argparse.py has a subclass that redefines these methods, though I think it is more complex than necessary.
class ErrorRaisingArgumentParser(argparse.ArgumentParser):
In , part of , which creates a parser mode that is closer to optparse in style, I simply use:
def error(self, message):
    usage = self.format_usage()
    raise Exception('%s%s'%(usage, message))
ArgumentParser.error = error
to catch errors. A Javascript port of argparse adds a 'debug' option to the ArgumentParser that effectively redefines this error method. They use that extensively in testing. Another approach is to trap the sysexit. IPython does that when argparse is run interactively. Even the simple try block works, though the SystemExit 2 has no information about the error.
try:
    args = parser.parse_args('X'.split())
except SystemExit as e:
    print(e)
Finally, plac is a pypi package that is built on argparse. It has a well developed interactive mode, and integrates threads and multiprocessing.
I would like to send a patch for the issue. How do I start?
This issue is a duplicate of issue 9112 which was resolved by commit 9375492b
It is a good idea. So I updated this title and added PR 15362. I am not sure whether there is a problem with xuanji's CLA or not~
New changeset f545638b5701652ffbe1774989533cdf5bc6631e by Miss Islington (bot) (Hai Shi) in branch 'master':
bpo-9938: Add optional keyword argument exit_on_error to argparse.ArgumentParser (GH-15362)
Thank you for your PR and for your time, I have merged the PR into master.
Stéphane, thanks for your good comment. Some of argparse's bpo issues are too old ;)
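For reference, the merged change became the exit_on_error keyword in Python 3.9. A minimal sketch of how it addresses the interactive-shell use case from this issue (note that a few error paths, such as unrecognized arguments, may still raise SystemExit):
import argparse

parser = argparse.ArgumentParser(exit_on_error=False)
parser.add_argument('--count', type=int)
try:
    args = parser.parse_args('--count notanumber'.split())
except argparse.ArgumentError as err:
    # e.g. "argument --count: invalid int value: 'notanumber'"
    print(err)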
https://bugs.python.org/issue9938
CC-MAIN-2020-45
en
refinedweb
SauceCodeFr @SauceCodeFr
Posts made by SauceCodeFr
- RE: Manually changing the sessionId / persistent character
Why override the sessionId by forcing yours? Just add another variable sent via the state or a message, something like userId. A session should be volatile, but a user id should be persistent.
- RE: Manually changing the sessionId / persistent character
This behaviour is normal: a session id is not an id attributed to your user, just an id attributed to his session. If you want to keep an id across browser restarts you must implement authentication! And store data assigned to the user's email address or something. You may also want to use it for retrieving user information. But you need to store this in a different database than your room state.
- RE: Manually changing the sessionId / persistent character
Hi! If your room is set to autoDispose = false the state is kept, so it can keep your character data in. And then with a reconnection, the data can be retrieved with the session id. But the data will still be lost if the Room is closed and you can't change that... A room is a temporary instance of your game, that's why you should store your information in a dedicated database :)
- RE: Store data in a Quadtree
Hi! I have done HashMaps but not a Quadtree; however I think this is possible as you can nest objects into a custom structure. I did not test it but recursion may not be a problem. Something like (not tested, just a draft):
import { Schema, type } from "@colyseus/schema";
class Node extends Schema {
    @type([Node]) nodes = new ArraySchema<Node>();
    @type([Entity]) entities = new ArraySchema<Entity>();
}
class MyState extends Schema {
    @type(Node) node: Node = new Node();
}
On each Node you may have 4 child Nodes, and each node can have 0 to many Entity. It may be a good idea to write a NodeManager attached to the State for building and updating the tree!
- RE: Adding / Removing Schema fields during runtime
Okay, I am working on something! The solution may be having a global hashmap of Component objects detached from Entity objects. It allows us to group the Component types in the State and send them as a group. Not sure it will work but, at least, I will try this :) I will keep the work on this thread updated.
- Adding / Removing Schema fields during runtime
Hi, I am working on an opensource ecs game engine and I wanted to replace the current custom network management with Colyseus! But ECS means that my game is composed of several Entity objects that can have 0 to many components. A Component is assigned to an Entity by putting it into an Entity.components object holding them by their names. I am not very familiar with the Schema aspect; how can I bring this data structure into Colyseus?
Idea 1: Define types during runtime with defineTypes, something like:
const entity = new Entity() // Entity extends empty Schema
entity.addComponent(new MyComponent()) // MyComponent extends Schema with data structure
// The second line will run this:
schema.defineTypes(entity, { MyComponent: MyComponent });
But, well... Schema defines types in the object prototype, not the instance... so it will not work, as each Entity instance may have different Components.
Idea 2: Sending maps of Components for each Entity (preferred solution, as I already have an Entity.components attribute):
class Entity extends Schema {
    @type({ map: Components }) components = new MapSchema<Components>();
}
But again I think it will not work because the engine will store different Component data structures.
Examples: PositionnableComponent with x, y, angle; HealthComponent with current, maximum, etc...
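A rough sketch of what Idea 2 can look like when the maps are grouped per concrete Component type instead, so each map stays homogeneous (all names here are illustrative, following the thread's idea rather than a finished design):
import { Schema, type, MapSchema } from "@colyseus/schema";

class PositionnableComponent extends Schema {
    @type("number") x = 0;
    @type("number") y = 0;
    @type("number") angle = 0;
}

class HealthComponent extends Schema {
    @type("number") current = 100;
    @type("number") maximum = 100;
}

class MyState extends Schema {
    // one homogeneous map per Component type, keyed by entity id
    @type({ map: PositionnableComponent }) positions = new MapSchema<PositionnableComponent>();
    @type({ map: HealthComponent }) healths = new MapSchema<HealthComponent>();
}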
https://discuss.colyseus.io/user/saucecodefr
CC-MAIN-2020-45
en
refinedweb
Hot questions for Using Vapor in kitura
Question: I know the question has been asked before and I agree with most answers that claim it is better to follow the way requests are made async with URLSession in Swift 3. I have the following scenario, where async requests cannot be used.
With Swift 3 and the ability to run Swift on servers I have the following problem.
- Server receives a request from a client
- To process the request the server has to send a url request and wait for the response to arrive.
- Once the response arrives, process it and reply to the client
The problem arises in step 2, where URLSession gives us the ability to initiate an async data task only. Most (if not all) server side Swift web frameworks do not support async responses. When a request arrives to the server everything has to be done in a synchronous manner and at the end send the response.
The only solution I have found so far is using DispatchSemaphore (see example at the end) and I am not sure whether that will work in a scaled environment. Any help or thoughts would be appreciated.
extension URLSession {
    func synchronousDataTaskWithURL(_ url: URL) -> (Data?, URLResponse?, Error?) {
        var data: Data?
        var response: URLResponse?
        var error: Error?
        let sem = DispatchSemaphore(value: 0)
        let task = self.dataTask(with: url as URL, completionHandler: {
            data = $0
            response = $1
            error = $2 as Error?
            sem.signal()
        })
        task.resume()
        let result = sem.wait(timeout: DispatchTime.distantFuture)
        switch result {
        case .success:
            return (data, response, error)
        case .timedOut:
            let error = URLSessionError(kind: URLSessionError.ErrorKind.timeout)
            return (data, response, error)
        }
    }
}
I only have experience with the Kitura web framework and this is where I faced the problem. I suppose that similar problems exist in all other Swift web frameworks.
Answer: Your three-step problem can be solved via the use of a completion handler, i.e., a callback handler a la Node.js convention:
import Foundation
import Kitura
import HeliumLogger
import LoggerAPI

let session = URLSession(configuration: URLSessionConfiguration.default)
Log.logger = HeliumLogger()
let router = Router()

router.get("/test") { req, res, next in
    let datatask = session.dataTask(with: URL(string: "")!) { data, urlResponse, error in
        try! res.send(data: data!).end()
    }
    datatask.resume()
}

Kitura.addHTTPServer(onPort: 3000, with: router)
Kitura.run()
This is a quick demo of a solution to your problem, and it is by no means following best Swift/Kitura practices. But, with the use of a completion handler, I am able to have my Kitura app make an HTTP call to fetch the resource at the given URL, wait for the response, and then send the result back to my app's client.
Link to the relevant API:
http://thetopsites.net/projects/vapor/kitura.shtml
CC-MAIN-2020-45
en
refinedweb
Combining several datasets into a global consistent model is usually performed using a technique called registration. The key idea is to identify corresponding points between the data sets and find a transformation that minimizes the distance (alignment error) between corresponding points. This process is repeated, since correspondence search is affected by the relative position and orientation of the data sets. Once the alignment errors fall below a given threshold, the registration is said to be complete.
The pcl_registration library implements a plethora of point cloud registration algorithms for both organized and unorganized (general purpose) datasets.
#include <pcl/registration/gicp6d.h>
GeneralizedIterativeClosestPoint6D integrates L*a*b* color space information into the Generalized Iterative Closest Point (GICP) algorithm. The suggested input is PointXYZRGBA.
- Constructor.
- Provide a pointer to the input source (e.g., the point cloud that we want to align to the target).
- Provide a pointer to the input target (e.g., the point cloud that we want to align the input source to).
- Rigid transformation computation method with initial guess.
- Holds the converted (LAB) data cloud.
- Holds the converted (LAB) model cloud.
- 6d-tree to search in model cloud.
- The color weight.
- Custom point representation to perform kdtree searches in more than 3 (i.e. in all 6) dimensions.
- Enables 6d searches with kd-tree class using the color weight.
Definition at line 79 of file gicp6d.h.
References pcl::geometry::distance(), pcl::PointRepresentation< PointT >::nr_dimensions_, PCL_EXPORTS, and pcl::PointRepresentation< PointT >::trivial_.
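A minimal usage sketch, assuming the class follows PCL's usual Registration interface (setInputSource/setInputTarget/align), which matches the member descriptions above:
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/registration/gicp6d.h>

int main() {
    pcl::PointCloud<pcl::PointXYZRGBA>::Ptr source(new pcl::PointCloud<pcl::PointXYZRGBA>);
    pcl::PointCloud<pcl::PointXYZRGBA>::Ptr target(new pcl::PointCloud<pcl::PointXYZRGBA>);
    // ... fill both clouds with colored points ...

    pcl::GeneralizedIterativeClosestPoint6D gicp;  // optional constructor arg: L*a*b* color weight
    gicp.setInputSource(source);
    gicp.setInputTarget(target);

    pcl::PointCloud<pcl::PointXYZRGBA> aligned;
    gicp.align(aligned);  // runs registration; aligned = source transformed onto target

    if (gicp.hasConverged()) {
        Eigen::Matrix4f transform = gicp.getFinalTransformation();
    }
    return 0;
}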
https://pointclouds.org/documentation/group__registration.html
CC-MAIN-2020-45
en
refinedweb
An application layer protocol for the establishment of decentralized democracy.
Project description
BitGov
BitGov is an application layer protocol built with the Python socket module. It piggybacks on the layer four Transmission Control Protocol (TCP) in combination with the IPv4 address family.
Installation
To use the BitGov protocol in your own application you may use one of the two methods listed below.
Conda (recommended)
Assuming you have Anaconda installed, activate the environment in which you want to install the package.
conda activate <environment>
If you want to create a new environment first use conda create --name <environment> or if you want to see a list of all your conda environments, conda info --envs.
Next, install the package.
conda install -c jgphilpott bitgov
Note: It may be necessary to first add the channel 'jgphilpott'. To do this use conda config --add channels jgphilpott. To view a list of all your conda channels, conda config --get channels.
All done! You should now be able to see the package listed in your environment, conda list.
Pip (alternative)
First, navigate into your project directory and activate the environment in which you want to install the package.
source <environment>/bin/activate
If you want to create a new environment first use virtualenv <environment>. Next, install the package.
pip install bitgov
Note: If you want to install the package locally and not in any specific environment use pip install bitgov --user.
All done! You should now be able to see the package listed in your environment, pip list.
Usage
To get started with BitGov create a Python file for your application and import the package.
import bitgov
That's it!
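The README stops at the import, so none of BitGov's own calls are shown here. Purely to illustrate the transport it describes (TCP over IPv4 via the socket module), a minimal sketch of that layer:
import socket

# SOCK_STREAM = TCP (layer four), AF_INET = IPv4, as described above
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.bind(("127.0.0.1", 5000))  # address and port are arbitrary examples
    server.listen()
    conn, addr = server.accept()
    with conn:
        data = conn.recv(1024)  # read a request from the peer
        conn.sendall(data)      # echo it back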
https://pypi.org/project/bitgov/
CC-MAIN-2020-45
en
refinedweb
select_attach() Attach a file descriptor to a dispatch handle Synopsis: #include <sys/iofunc.h> #include <sys/dispatch.h> int select_attach ( void *dpp, select_attr_t *attr, int fd, unsigned flags, int (*func)( select_context_t *ctp, int fd, unsigned flags, void *handle ), void *handle ); Arguments: - dpp - The dispatch handle, as returned by dispatch_create() , that you want to attach to a file descriptor. - attr - A pointer to a select_attr_t structure. This structure is defined as: typedef struct _select_attr { unsigned flags; } select_attr_t; Currently, no attribute flags are defined. - fd - The file descriptor that you want to attach to the dispatch handle. - flags - Flags that specify the events that you're interested in. For more information, see Flags, below. - func - The function that you want to call when the file descriptor unblocks. For more information, see Function, below. - handle - A pointer to arbitrary data that you want to pass to func. Library: libc Use the -l c option to qcc to link against this library. This library is usually included automatically. Description: The function select_attach() attaches the file descriptor fd to the dispatch handle dpp and selects flags events. When fd unblocks, func is called with handle. Flags The available flags are defined in <sys/dispatch.h>. The following flags use ionotify() mechanisms (see ionotify() for further details): - SELECT_FLAG_EXCEPT - Out-of-band data is available. The definition of out-of-band data depends on the resource manager. - SELECT_FLAG_READ - There's input data available. The amount of data available defaults to 1. For a character device such as a serial port, this is a character. For a POSIX message queue, it's a message. Each resource manager selects an appropriate object. - SELECT_FLAG_WRITE - There's room in the output buffer for more data. The amount of room available needed to satisfy this condition depends on the resource manager. Some resource managers may default to an empty output buffer, while others may choose some percentage of the empty buffer. These flags are specific to dispatch: - SELECT_FLAG_REARM - Rearm the fd after an event is dispatched. - SELECT_FLAG_SRVEXCEPT - Register a function that's called whenever a server, to which this client is connected, dies. (This flag uses the ChannelCreate() function's _NTO_CHF_COID_DISCONNECT flag. In this case, fd is ignored.) Function The argument func is the user-supplied function that's called when one of the registered events occurs on fd. This function should return 0 (zero); other values are reserved. The function is passed the following arguments: - ctp - Context pointer. - fd - The fd on which the event occurred. - flags - The type of event that occurred. The possible flags are: - SELECT_FLAG_EXCEPT - SELECT_FLAG_READ - SELECT_FLAG_WRITE For descriptions of the flags passed to func, see Flags, above. - handle - The handle passed to select_attach(). Returns: Zero on success, or -1 if an error occurred (errno is set). Errors: - EINVAL - Invalid argument. - ENOMEM - Insufficient memory was available. Examples: For an example with select_attach(), see dispatch_create() . For other examples using the dispatch interface, see message_attach() , resmgr_attach() , and thread_pool_create() .
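A minimal sketch of the typical usage pattern, combined with the standard dispatch loop (error checking omitted for brevity):
#include <sys/iofunc.h>
#include <sys/dispatch.h>

static int io_ready(select_context_t *ctp, int fd,
                    unsigned flags, void *handle) {
    /* called when fd has input data available */
    return 0;
}

int main(void) {
    dispatch_t *dpp = dispatch_create();
    int fd = 0; /* some descriptor you want to watch, e.g. stdin */

    /* re-register automatically after each event with SELECT_FLAG_REARM */
    select_attach(dpp, NULL, fd,
                  SELECT_FLAG_READ | SELECT_FLAG_REARM,
                  io_ready, NULL);

    dispatch_context_t *ctp = dispatch_context_create(dpp);
    while (1) {
        ctp = dispatch_block(ctp);
        dispatch_handler(ctp);
    }
}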
https://developer.blackberry.com/playbook/native/reference/com.qnx.doc.neutrino.lib_ref/topic/s/select_attach.html
CC-MAIN-2020-45
en
refinedweb
Tinting and Recoloring¶
Color and the Model Loader¶
If you wish, you can manually override the color attribute which has been specified by the model loader.
nodePath.set_color(r, g, b, a);
Again, this is an override. If the model already had vertex colors, these will disappear: set_color() overrides them. No matter how the model's original color was applied, a single call to set_color() is enough to override it.
You can remove a previous set_color() using clear_color().
Tinting the Model¶
Sometimes, you don’t want to replace the existing color; sometimes, you want to tint the existing colors. For this, you need setColorScale:
nodePath.set_color_scale(r, g, b, a);
This color will be modulated (multiplied) with the existing color. You can remove a previous set_color_scale() using clear_color_scale().
Demonstration¶
To see the difference between set_color() and set_color_scale(), load three copies of the same model, apply set_color() to one copy and set_color_scale() to another (using the same medium-blue color for both), and start the app with base.run().
This produces the following output:
The model on the left is the original, unaltered model. Nik has used vertex colors throughout. The yellow of the belly, the black eyes, the red mouth, these are all vertex colors. The one in the middle has been setColor-ed to a medium-blue color. As you can see, the setColor completely replaces the vertex colors. The one on the right has been setColorScale-ed to the same medium-blue color, but this only tints the model.
A Note about Color Spaces¶
All colors that Panda3D expects are floating-point values between 0.0 and 1.0. Panda3D performs no correction or color space conversion before writing them into the framebuffer. This means that if you are using a linear workflow (ie. you have set framebuffer-srgb in Config.prc or are using a post-processing filter that converts the rendered image to sRGB), all colors are specified in “linearized sRGB” instead of gamma-encoded sRGB. Applying a color obtained from a color picker is no longer as simple as dividing by 255!
An easy way to correct existing colors when switching to a linear workflow is to apply a 2.2 gamma. This is a good approximation for the sRGB transform function:
model1.set_color(powf(0.6, 2.2), powf(0.5, 2.2), powf(0.3, 2.2));
A better method is to use the sRGB conversion functions that Panda3D provides. For example, to apply the #51C2C6 color, you can do as follows:
#include "convert_srgb.h"

model1.set_color(
    decode_sRGB_float(0x51),
    decode_sRGB_float(0xC2),
    decode_sRGB_float(0xC6));
If you are not using a linear workflow, or don’t know what that is, you don’t need to worry about this for now.
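Since the override and the tint are independent attributes, you can undo each one separately at runtime with the clear calls described above, for example (model2 and model3 being the recolored and tinted copies):
model2.clear_color();        // removes the override; the vertex colors reappear
model3.clear_color_scale();  // removes the tint applied by set_color_scale()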
https://docs.panda3d.org/1.10/cpp/programming/render-attributes/tinting-and-recoloring
CC-MAIN-2020-45
en
refinedweb
passwordless_auth alternatives and similar packages Based on the "Authentication" category guardian10.0 5.7 passwordless_auth VS guardianAn authentication framework for use with Elixir applications. coherence9.8 0.9 passwordless_auth VS coherenceCoherence is a full featured, configurable authentication system for Phoenix. ueberauth9.7 2.7 passwordless_auth VS ueberauthAn Elixir Authentication System for Plug-based Web Applications. Pow9.7 8.4 passwordless_auth VS PowPow is a robust, modular, and extendable authentication and user management solution for Phoenix and Plug-based apps. oauth29.3 0.0 passwordless_auth VS oauth2An OAuth 2.0 client library for Elixir. Phauxth8.8 2.5 passwordless_auth VS PhauxthAuthentication library for Phoenix, and other Plug-based, web applications Shield8.3 1.2 passwordless_auth VS ShieldShield is an OAuth2 Provider hex package and also a standalone microservice build top of the Phoenix Framework and 'authable' package. goth8.0 1.5 passwordless_auth VS gothOAuth 2.0 library for server to server applications via Google Cloud APIs. PowAssent7.8 7.0 passwordless_auth VS PowAssentUse Google, Github, Twitter, Facebook, or add your custom strategy for authorization to your Pow enabled Phoenix app. ueberauth_googleA Google strategy for Überauth. basic_auth7.7 2.0 passwordless_auth VS basic_authElixir Plug to easily add HTTP basic authentication to an app. ueberauth_facebookFacebook OAuth2 Strategy for Überauth. samly7.2 0.0 passwordless_auth VS samlySAML SP SSO made easy (Doc). ueberauth_githubA GitHub strategy for Überauth. Veil7.1 0.9 passwordless_auth VS VeilSimple passwordless authentication for your Phoenix apps aws_auth6.7 0.0 passwordless_auth VS aws_authAWS Signature Version 4 Signing Library for Elixir. doorman6.7 0.0 passwordless_auth VS doormanTools to make Elixir authentication simple and flexible. ueberauth_identityA simple username/password strategy for Überauth. ueberauth_auth0An Ueberauth strategy for using Auth0 to authenticate your users. ueberauth_twitterTwitter Strategy for Überauth. oauther6.1 0.0 passwordless_auth VS oautherAn OAuth 1.0 implementation for Elixir. ueberauth_slackA Slack strategy for Überauth. oauth2ex5.8 0.0 passwordless_auth VS oauth2exAnother OAuth 2.0 client library for Elixir. Paseto5.4 4.7 passwordless_auth VS PasetoAn Elixir implementation of Paseto (Platform-Agnostic Security Tokens) Paddle5.1 2.4 passwordless_auth VS PaddleA library simplifying LDAP usage in Elixir projects elixir_auth_googleThe simplest way to add Google OAuth authentication ("Sign in with Google") to your Elixir/Phoenix app. blackbook4.7 0.0 passwordless_auth VS blackbookAll-in-one membership/authentication system for Elixir. ueberauth_vkvk.com Strategy for Überauth. aeacus4.4 0.0 passwordless_auth VS aeacusA simple configurable identity/password authentication module (Compatible with Ecto/Phoenix). ueberauth_microsoftA Microsoft strategy for Überauth. phoenix_client_sslClient SSL Authentication Plugs for Phoenix and other Plug-based apps. ueberauth_casCentral Authentication Service strategy for Überauth. ueberauth_active_directoryUberauth strategy for Active Directory authentication. ueberauth_weiboWeibo OAuth2 Strategy for Überauth. zachaeus2.3 6.1 passwordless_auth VS zachaeusAn easy to use licensing system, based on asymmetric cryptography. sigaws2.1 0.0 passwordless_auth VS sigawsAWS Signature V4 signing and verification library (Doc). mojoauth2.1 0.0 passwordless_auth VS mojoauthMojoAuth implementation in Elixir. 
sesamex2.1 0.0 passwordless_auth VS sesamexAnother simple and flexible authentication solution in 5 minutes!. htpasswd1.8 0.0 passwordless_auth VS htpasswdApache httpasswd file reader/writer in Elixir. exBankID1.8 7.1 passwordless_auth VS exBankIDexBankID is a simple stateless API-client for the Swedish BankID API github_oauthA simple github oauth library. oauth2_githubA GitHub OAuth2 Provider for Elixir. oauth2_facebookA Facebook OAuth2 Provider for Elixir. ueberauth_lineLINE Strategy for Überauth. apache_passwd_md5Apache/APR Style Password Hashing. oauth2cli0.6 0.0 passwordless_auth VS oauth2cliSimple OAuth2 client written for Elixir. ueberauth_foursquareFoursquare OAuth2 Strategy for Überauth.
README
PasswordlessAuth
PasswordlessAuth provides functionality that can be used in an authentication or verification system, such as a passwordless or multi-factor authentication flow, or for verifying a user's ownership of a phone number, email address or any other identifying address.
- Generate verification codes
- Verify a user's attempt at entering a code
- Rate limit attempts
- Expire codes
This library doesn't deal with sending the codes to recipients. See Usage for example usage.
Documentation
Documentation is available at
Installation
Add :passwordless_auth to your list of dependencies in mix.exs:
def deps do
  [
    {:passwordless_auth, "~> 0.3.0"}
  ]
end
Configuration
The following PasswordlessAuth config can be set in your config/config.exs file:
config :passwordless_auth,
  # How long codes are valid
  verification_code_ttl: 300, # seconds; default: 300
  # Rate limiting: how many failed attempts are allowed before the timeout is applied
  num_attempts_before_timeout: 5, # default: 5
  # Rate limiting: how long to disallow attempts after the limit has been reached
  rate_limit_timeout_length: 60, # seconds; default: 60
  # How often to clear out expired codes
  garbage_collector_frequency: 30 # seconds; default: 30
Usage
Here's an example where the code is sent to a recipient's phone number using ExTwilio.
1. Generate a verification code for the recipient
User enters their phone number to request a verification code.
code = PasswordlessAuth.generate_code("+447123456789")
=> "123456"
2. Send the code to the recipient
This library doesn't deal with SMS or emails, so this bit is up to you.
ExTwilio.Message.create(%{
  to: "+447123456789",
  body: "Your code is #{code}"
})
3. Verify the code
Recipient receives a text message with their verification code. They enter it into your system and you verify that it is correct.
attempt_code = "123456" # The user's attempt at entering the correct verification code.
PasswordlessAuth.verify_code(
  "+447123456789",
  attempt_code
)
Returns true or false.
Once a code has been verified, you can remove it so that it can't be used again before it expires.
PasswordlessAuth.remove_code("+447123456789")
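A sketch of the three steps wired together in one place, here as a hypothetical Phoenix controller (the controller itself is not part of this package; only the PasswordlessAuth and ExTwilio calls come from the README):
def request_code(conn, %{"phone" => phone}) do
  code = PasswordlessAuth.generate_code(phone)
  ExTwilio.Message.create(%{to: phone, body: "Your code is #{code}"})
  json(conn, %{sent: true})
end

def verify(conn, %{"phone" => phone, "code" => attempt_code}) do
  if PasswordlessAuth.verify_code(phone, attempt_code) do
    PasswordlessAuth.remove_code(phone)  # prevent reuse before expiry
    json(conn, %{verified: true})
  else
    json(conn, %{verified: false})
  end
end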
https://elixir.libhunt.com/passwordless_auth-alternatives
CC-MAIN-2020-45
en
refinedweb
- Pass the configuration into the base class constructor
- Perform asynchronous startup functions before starting the application
- Perform graceful cleanup functions when the application stops
src/widget.application.ts
import {Application} from '@loopback/core';
import {RestComponent} from '@loopback/rest';
import {UserController, ShoppingCartController} from './controllers';

export class WidgetApplication extends Application {
  constructor() {
    // This is where you would pass configuration to the base constructor
    // (as well as handle your own!)
    super({
      rest: {
        port: 8080,
      },
    });
    const app = this; // For clarity.
    // You can bind to the Application-level context here.
    // app.bind('foo').to(bar);
    app.component(RestComponent);
    app.controller(UserController);
    app.controller(ShoppingCartController);
  }
}
Components
app.component(MyComponent);
app.component(RestComponent);
The component function allows binding of component constructors within your Application instance’s context. For more information on how to make use of components, see Using Components.
Controllers
app.controller(FooController);
app.controller(BarController);
Much like the component function, the controller function allows binding of Controllers to the Application context.
Servers
app.server(RestServer);
app.servers([MyServer, GrpcServer]);
The server function is much like the previous functions, but bulk bindings are possible with Servers through the function servers.
const app = new Application();
app.server(RestServer, 'public'); // {'public': RestServer}
app.server(RestServer, 'private'); // {'private': RestServer}
In the above example, the two server instances would be bound to the Application context under the keys servers.public and servers.private, respectively.
Constructor configuration
The Application class constructor also accepts an ApplicationConfig object which contains component-level configurations such as RestServerConfig. It will automatically create bindings for these configurations, which can later be injected through dependency injection. Visit Dependency Injection for more information.
Note: Binding configuration such as component binding, provider binding, or binding scopes is not possible with the constructor-based configuration approach.
export class MyApplication extends RestApplication {
  constructor() {
    super({
      rest: {
        port: 4000,
        host: 'my-host',
      },
    });
  }
}
Tips for application setup
Here are some tips for application setup to help avoid common pitfalls and mistakes.
Extend from RestApplication when using RestServer
If you want to use RestServer from the @loopback/rest package, we recommend extending RestApplication in your app instead of manually binding RestServer or RestComponent. RestApplication already uses RestComponent and makes useful functions in RestServer like handler() available at the app level. This means you can call the RestServer functions to perform all of your server-level setups in the app constructor without having to explicitly retrieve an instance of your server.
Serve static files
The RestServer allows static files to be served. It can be set up by calling the static() API.
app.static('/html', rootDirForHtml);
or
server.static(['/html', '/public'], rootDirForHtml);
Static assets are not allowed to be mounted on / to avoid performance penalty as / matches all paths and incurs file system access for each HTTP request.
The static() API delegates to serve-static to serve static files. Please see the serve-static documentation for details.
WARNING: The static assets are served before the LoopBack sequence of actions.
If an error is thrown, the reject action will NOT be triggered.
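A minimal sketch combining the two tips above, extending RestApplication and serving static files (the port and directory are arbitrary examples):
import {RestApplication} from '@loopback/rest';
import * as path from 'path';

export class MySiteApplication extends RestApplication {
  constructor() {
    super({rest: {port: 3000}});
    // static() is available directly on the app because RestApplication
    // exposes the RestServer functions at the app level
    this.static('/html', path.join(__dirname, '../public'));
  }
}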
https://loopback.io/doc/en/lb4/Application.html
CC-MAIN-2020-45
en
refinedweb
XUL::Gui - render cross platform gui applications with firefox from perl
version 0.63
this module is under active development, interfaces may change.
this code is currently in beta, use in production environments at your own risk
use XUL::Gui;
display Label 'hello, world!'; # short enough? remove "Label" for bonus points
this module exposes the entire functionality of mozilla firefox's rendering engine to perl by providing all of the XUL and HTML tags as functions and allowing you to interact with those objects directly from perl. gui applications created with this toolkit are cross platform, fully support CSS styling, inherit firefox's rich assortment of web technologies (browser, canvas and video tags, flash and other plugins), and are even easier to write than HTML.
guis created with this module are event driven. an arbitrarily complex (and runtime mutable) object tree is passed to display, which then creates the gui in firefox and starts the event loop. display will wait for and respond to events until the quit function is called, or the user closes the window.
all of javascript's event handlers are available, and can be written in perl (normally) or javascript (for handlers that need to be very fast such as image rollovers with onmouseover or the like). this is not to say that perl side handlers are slow, but with rollovers and fast mouse movements, sometimes there is mild lag due to protocol overhead.
this module is written in pure perl, and only depends upon core modules, making it easy to distribute your application.
the goal of this module is to make all steps of gui development as easy as possible. XUL's widgets and nested design structure gets us most of the way there, and this module with its light weight syntax, and 'do what i mean' nature hopefully finishes the job. everything has sensible defaults with minimal boilerplate, and nested design means a logical code flow that isn't littered with variables. please send feedback if you think anything could be improved.
just like in HTML, you build up your gui using tags. all tags (XUL tags, HTML tags, user defined widgets, and the display function) are parsed the same way, and can fit into one of four templates:
HR() <hr />
B('some bold text') <b>some bold text</b>
in the special case of a tag with one argument, which is not another tag, that argument is added to that tag as a text node. this is mostly useful for HTML tags, but works with XUL as well. once parsed, the line B('...') becomes B( TEXT => '...' ). the special TEXT attribute can be used directly if other attributes need to be set: FONT( color=>'blue', TEXT=>'...' ).
Label( value=>'some text', style=>'color: red' ) <label value="some text" style="color: red;" />
Hbox( id => 'mybox', pack => 'center', Label( value => 'hello' ), BR, B('world') )
<hbox id="mybox" pack="center"> <label value="hello" /> <br /> <b>world</b> </hbox>
as you can see, the tag functions in perl nest and behave the same way as their counterpart element constructors in HTML/XUL.
just like in HTML, you access the elements in your gui by id. but rather than using document.getElementById(...) all the time, setting the id attribute names an element in the global %ID hash. the same hash can be accessed using the ID(some_id) function.
my $object = Button( id => 'btn', label => 'OK' );
# $ID{btn} == ID(btn) == $object
the ID hash also exists in javascript:
ID.btn == document.getElementById('btn')
due to the way this module works, every element needs an id, so if you don't set one yourself, an auto generated id matching /^xul_\d+$/ is used. you can use any id that matches /\w+/
Tk's attribute style with a leading dash is supported. this is useful for readability when collapsing attribute lists with qw//
TextBox id=>'txt', width=>75, height=>20, type=>'number', decimalplaces=>4;
TextBox qw/-id txt -width 75 -height 20 -type number -decimalplaces 4/;
multiple 'style' attributes are joined with ';' into a single attribute
all XUL and HTML objects in perl are exact mirrors of their javascript counterparts and can be acted on as such. for anything not written in this document or XUL::Gui::Manual, developer.mozilla.com is the official source of documentation.
any tag attribute name that matches /^on/ is an event handler (onclick, onfocus, ...), and expects a sub {...} (perl event handler) or function q{...} (javascript event handler).
perl event handlers get passed a reference to their object and an event object
Button( label=>'click me', oncommand=> sub {
    my ($self, $event) = @_;
    $self->label = $event->type;
})
in the event handler, $_ == $_[0] so a shorter version would be:
oncommand => sub {$_->label = pop->type}
javascript event handlers have event and this set for you
Button( label=>'click me', oncommand=> function q{
    this.label = event.type;
})
any attribute with a name that doesn't match /^on/ that has a code ref value is added to the object as a method. methods are explained in more detail later on.
use XUL::Gui; # is the same as
use XUL::Gui qw/:base :util :pragma :xul :html :const :image/;
the following export tags are available:
:base %ID ID alert display quit widget
:tools function gui interval serve timeout toggle XUL
:pragma buffered cached delay doevents flush noevents now
:const BLUR FILL FIT FLEX MIDDLE SCROLL
:widgets ComboBox filepicker prompt
:image bitmap bitmap2src
:util apply mapn trace zip
:internal genid object realid tag
:all (all exports)
:default (same as with 'use XUL::Gui;')
:xul (also exported as Titlecase)
Action ArrowScrollBox Assign BBox Binding Bindings Box Broadcaster BroadcasterSet Browser Button Caption CheckBox ColorPicker Column Columns Command CommandSet Conditions Content DatePicker Deck Description Dialog DialogHeader DropMarker Editor Grid Grippy GroupBox HBox IFrame Image Key KeySet Label ListBox ListCell ListCol ListCols ListHead ListHeader ListItem Member Menu MenuBar MenuItem MenuList MenuPopup MenuSeparator Notification NotificationBox Observes Overlay Page Panel Param PopupSet PrefPane PrefWindow Preference Preferences ProgressMeter Query QuerySet Radio RadioGroup Resizer RichListBox RichListItem Row Rows Rule Scale Script ScrollBar ScrollBox ScrollCorner Separator Spacer SpinButtons Splitter Stack StatusBar StatusBarPanel StringBundle StringBundleSet Tab TabBox TabPanel TabPanels Tabs Template TextBox TextNode TimePicker TitleBar ToolBar ToolBarButton ToolBarGrippy ToolBarItem ToolBarPalette ToolBarSeparator ToolBarSet ToolBarSpacer ToolBarSpring ToolBox ToolTip Tree TreeCell TreeChildren TreeCol TreeCols TreeItem TreeRow TreeSeparator Triple VBox Where Window Wizard WizardPage
:html (also exported as html_lowercase)
A ABBR ACRONYM ADDRESS APPLET AREA AUDIO B BASE BASEFONT BDO BGSOUND BIG BLINK BLOCKQUOTE BODY BR BUTTON CANVAS CAPTION CENTER CITE CODE COL COLGROUP COMMENT DD
my $object = Button( id => 'btn', label => 'OK' ); # $ID{btn} == ID(btn) == $object the ID hash also exists in javascript: ID.btn == document.getElementById('btn') due to the way this module works, every element needs an id , so if you don't set one yourself, an auto generated id matching /^xul_\d+$/ is used. you can use any id that matches /\w+/ Tk's attribute style with a leading dash is supported. this is useful for readability when collapsing attribute lists with qw// TextBox id=>'txt', width=>75, height=>20, type=>'number', decimalplaces=>4; TextBox qw/-id txt -width 75 -height 20 -type number -decimalplaces 4/; multiple 'style' attributes are joined with ';' into a single attribute all XUL and HTML objects in perl are exact mirrors of their javascript counterparts and can be acted on as such. for anything not written in this document or XUL::Gui::Manual, developer.mozilla.com is the official source of documentation: any tag attribute name that matches /^on/ is an event handler (onclick, onfocus, ...), and expects a sub {...} (perl event handler) or function q{...} (javascript event handler). perl event handlers get passed a reference to their object and an event object Button( label=>'click me', oncommand=> sub { my ($self, $event) = @_; $self->label = $event->type; }) in the event handler, $_ == $_[0] so a shorter version would be: oncommand => sub {$_->label = pop->type} javascript event handlers have event and this set for you Button( label=>'click me', oncommand=> function q{ this.label = event.type; }) any attribute with a name that doesn't match /^on/ that has a code ref value is added to the object as a method. methods are explained in more detail later on. use XUL::Gui; # is the same as use XUL::Gui qw/:base :util :pragma :xul :html :const :image/; the following export tags are available: :base %ID ID alert display quit widget :tools function gui interval serve timeout toggle XUL :pragma buffered cached delay doevents flush noevents now :const BLUR FILL FIT FLEX MIDDLE SCROLL :widgets ComboBox filepicker prompt :image bitmap bitmap2src :util apply mapn trace zip :internal genid object realid tag :all (all exports) :default (same as with 'use XUL::Gui;') :xul (also exported as Titlecase) Action ArrowScrollBox Assign BBox Binding Bindings Box Broadcaster BroadcasterSet Browser Button Caption CheckBox ColorPicker Column Columns Command CommandSet Conditions Content DatePicker Deck Description Dialog DialogHeader DropMarker Editor Grid Grippy GroupBox HBox IFrame Image Key KeySet Label ListBox ListCell ListCol ListCols ListHead ListHeader ListItem Member Menu MenuBar MenuItem MenuList MenuPopup MenuSeparator Notification NotificationBox Observes Overlay Page Panel Param PopupSet PrefPane PrefWindow Preference Preferences ProgressMeter Query QuerySet Radio RadioGroup Resizer RichListBox RichListItem Row Rows Rule Scale Script ScrollBar ScrollBox ScrollCorner Separator Spacer SpinButtons Splitter Stack StatusBar StatusBarPanel StringBundle StringBundleSet Tab TabBox TabPanel TabPanels Tabs Template TextBox TextNode TimePicker TitleBar ToolBar ToolBarButton ToolBarGrippy ToolBarItem ToolBarPalette ToolBarSeparator ToolBarSet ToolBarSpacer ToolBarSpring ToolBox ToolTip Tree TreeCell TreeChildren TreeCol TreeCols TreeItem TreeRow TreeSeparator Triple VBox Where Window Wizard WizardPage :html (also exported as html_lowercase) A ABBR ACRONYM ADDRESS APPLET AREA AUDIO B BASE BASEFONT BDO BGSOUND BIG BLINK BLOCKQUOTE BODY BR BUTTON CANVAS CAPTION CENTER CITE CODE COL COLGROUP COMMENT DD 
    DEL DFN DIR DIV DL DT EM EMBED FIELDSET FONT FORM FRAME FRAMESET H1 H2 H3 H4 H5 H6 HEAD HR HTML I IFRAME ILAYER IMG INPUT INS ISINDEX KBD LABEL LAYER LEGEND LI LINK LISTING MAP MARQUEE MENU META MULTICOL NOBR NOEMBED NOFRAMES NOLAYER NOSCRIPT OBJECT OL OPTGROUP OPTION P PARAM PLAINTEXT PRE Q RB RBC RP RT RTC RUBY S SAMP SCRIPT SELECT SMALL SOURCE SPACER SPAN STRIKE STRONG STYLE SUB SUP TABLE TBODY TD TEXTAREA TFOOT TH THEAD TITLE TR TT U UL VAR VIDEO WBR XML XMP

constants:

    FLEX    flex => 1
    FILL    flex => 1, align =>'stretch'
    FIT     sizeToContent => 1
    SCROLL  style => 'overflow: auto'
    MIDDLE  align => 'center', pack => 'center'
    BLUR    onfocus => 'this.blur()'

each is a function that returns its constant, prepended to its arguments, thus the following are both valid:

    Box FILL pack=>'end';
    Box FILL, pack=>'end';

if you prefer an OO interface, there are a few ways to get one:

    use XUL::Gui 'g->*';   # DYOI: draw your own interface

g (which could be any empty package name) now has all of XUL::Gui's functions as methods. since draw your own interface does what you mean (dyoidwym), each of the following graphic styles are equivalent: g->*, g->, ->g, install_into->g. normally, installing methods into an existing package will cause a fatal error, however you can add ! to force installation into an existing package

no functions are imported into your namespace by default, but you can request any you do want as usual:

    use XUL::Gui qw( g->* :base :pragma );

to use the OO interface:

    g->display( g->Label('hello world') );
    # is the same as
    XUL::Gui::display( XUL::Gui::Label('hello world') );

use g->id('someid') or g->ID('someid') to access the %ID hash

the XUL tags are also available in lc and lcfirst:

    g->label       == XUL::Gui::Label
    g->colorpicker == XUL::Gui::ColorPicker
    g->colorPicker == XUL::Gui::ColorPicker

the HTML tags are also available in lc, unless an XUL tag of the same name exists

if you prefer an object (which behaves exactly the same as the package 'g'):

    use XUL::Gui ();        # or anything you do want
    my $g = XUL::Gui->oo;   # $g now has XUL::Gui's functions as methods

if you like all the OO lowercase names, but want functions, draw that:

    use XUL::Gui qw( ->main:: );  # ->:: will also export to main::
                                  # '::' implies '!'
    display label 'hello, world';

display LIST

display starts the http server, launches firefox, and waits for events. it takes a list of gui objects, and several optional parameters:

    debug   (0) .. 6   adjust verbosity to stderr
    silent  (0) 1      disables all stderr status messages
    trusted  0 (1)     starts firefox with '-app' (requires firefox 3+)
    launch   0 (1)     launches firefox; if 0, connect to an already running instance
    skin     0 (1)     use the default 'chrome://global/skin' skin
    chrome   0 (1)     chrome mode disables all normal firefox gui elements,
                       setting this to 0 will turn those elements back on.
    xml     (0) 1      returns the object tree as xml, the gui is not launched
        perl           includes deparsed perl event handlers
    delay   milliseconds   delays each gui update cycle (for debugging)
    port               first port to start the server on, port++ after that;
                       otherwise a random 5 digit port is used
    mozilla  0 (1)     setting this to 0 disables all mozilla specific features
                       including all XUL tags, the filepicker, and any trusted
                       mode features. (used to implement Web::Gui)

if the first object is a Window, that window is created, otherwise a default one is added. the remaining objects are then added to the window.
display will not return until the gui quits

see SYNOPSIS, XUL::Gui::Manual, XUL::Gui::Tutorial, and the examples folder in this distribution for more details

quit

shuts down the server (causes a call to display to return at the end of the current event cycle). quit will shut down the server, but it can only shut down the client in trusted mode.

serve PATH MIMETYPE DATA

add a virtual file to the server

    serve '/myfile.jpg', 'text/jpeg', $jpegdata;

the paths qw( / /client.js /event /ping /exit /perl ) are reserved

object TAGNAME LIST

creates a gui proxy object, allows run time addition of custom tags

    object('Label', value=>'hello')  is the same as  Label( value=>'hello' )

the object function is the constructor of all proxied gui objects. objects and widgets inherit from a base class [object] that provides the following object inspection / extension methods. these methods operate on the current data that XUL::Gui is holding in perl, none of them will ever call out to the gui

    ->has('item!')    returns attributes or methods (see widget for details)
    ->attr('rows')    lvalue access to $$self{A} attributes
    ->child(2)        lvalue access to $$self{C} children

it only makes sense to use attr or child to set values on objects before they are written to the gui

    ->can('update')   lvalue access to $$self{M} methods
    ->attributes      returns %{ $$self{A} }
    ->children        returns @{ $$self{C} }
    ->methods         returns %{ $$self{M} }
    ->widget          returns $$self{W}
    ->id              returns $$self{ID}
    ->parent          returns $$self{P}
    ->super           returns $$self{ISA}[0]
    ->super(2)        returns $$self{ISA}[2]
    ->extends(...)    sets inheritance (see widget for details)

these methods are always available for widgets, and if they end up getting in the way of any javascript methods you want to call for gui objects:

    $object->extends(...)    # calls the perl introspection function
    $object->extends_(...)   # calls 'object.extends(...)' in the gui
    $x = $object->_extends;  # fetches the 'object.extends' property
    $object->setAttribute('extends', ...);  # and setting an attribute

or at runtime:

    local $XUL::Gui::EXTENDED_OBJECTS = 0;  # which prevents object inheritance
                                            # in the current lexical scope
    $object->extends(...);  # calls the real javascript 'extends' method
                            # assuming that it exists

tag NAME

returns a code ref that generates proxy objects, allows for user defined tag functions

    *mylabel = tag 'label';
    \&mylabel == \&Label

ID OBJECTID

returns the gui object with the id OBJECTID. it is exactly the same as $ID{OBJECTID} and has (*) glob context so you don't need to quote the id.

    Label( id => 'myid' )
    ...
    $ID{myid}->value = 5;
    ID(myid)->value = 5;   # same

widget {CODE} HASH

widgets are containers used to group tags together into common patterns. in addition to grouping, widgets can have methods, attached data, and can inherit from other widgets

    *MyWidget = widget {
        Hbox(
            Label( $_->has('label->value') ),
            Button( label => 'OK', $_->has('oncommand') ),
            $_->children
        )
    }
        method    => sub{ ... },
        method2   => sub{ ... },
        some_data => [ ...
    ];  # unless the value is a CODE ref, each widget
        # instance gets a new deep copy of the data

    $ID{someobject}->appendChild(
        MyWidget( label=>'widget', oncommand=>\&event_handler )
    );

inside the widget's code block, several variables are defined:

    variable    contains the passed in
    $_{A}   =   { attributes }
    $_{C}   =   [ children ]
    $_{M}   =   { methods }
    $_      =   a reference to the current widget (also as $_{W})
    @_      =   the unchanged runtime argument list

widgets have the following predefined (and overridable) methods that are synonyms / syntactic sugar for the widget variables:

    $_->has('label')         ~~ exists $_{A}{label} ? (label=>$_{A}{label}) : ()
    $_->has('label->value')  ~~ exists $_{A}{label} ? (value=>$_{A}{label}) : ()
    $_->has('!label !command->oncommand style')

->has(...) splits its arguments on whitespace and will search $_{A}, then $_{M} for the attribute. if an ! is attached (anywhere) to an attribute, it is required, and the widget will croak without it. in scalar context, if only one key => value pair is found, ->has() will return the value. otherwise, the number of found pairs is returned

    $_->attr( STRING )    $_{A}{STRING}  # lvalue
    $_->attributes        %{ $_{A} }
    $_->child( NUMBER )   $_{C}[NUMBER]  # lvalue
    $_->children          @{ $_{C} }
    $_->can( STRING )     $_{M}{STRING}  # lvalue
    $_->methods           %{ $_{M} }

most everything that you would want to access is available as a method of the widget (attributes, children, instance data, methods). since there may be namespace collisions, here is the namespace construction order:

    %widget_methods = (
        passed in attributes,
        predefined widget methods,
        widget methods and instance data,
        passed in methods
    );

widgets can inherit from other widgets using the ->extends() method:

    *MySubWidget = widget {$_->extends( &MyWidget )}
        submethod => sub {...};

more detail in XUL::Gui::Manual

alert STRING

opens an alert message box

prompt STRING

opens a prompt message box

filepicker MODE FILTER_PAIRS

opens a filepicker dialog. modes are 'open', 'dir', or 'save'. returns the path or undef on failure. if mode is 'open' and filepicker is called in list context, the picker can select multiple files. the filepicker is only available when the gui is running in 'trusted' mode.

    my @files = filepicker open =>
        Text   => '*.txt; *.rtf',
        Images => '*.jpg; *.gif; *.png';

trace LIST

carps LIST with object details, and then returns LIST unchanged

function JAVASCRIPT

create a javascript event handler, useful for mouse events that need to be very fast, such as onmousemove or onmouseover

    Button( label=>'click me', oncommand=> function q{
        this.label = 'ouch';
        alert('hello from javascript');
        if (some_condition) {
            perl("print 'hello from perl'");
        }
    })

$ID{myid} in perl is ID.myid in javascript. to access widget siblings by id, wrap the id with W{...}

interval {CODE} TIME LIST

perl interface to javascript's setInterval(). interval returns a code ref which when called will cancel the interval. TIME is in milliseconds. @_ will be set to LIST when the code block is executed.

timeout {CODE} TIME LIST

perl interface to javascript's setTimeout(). timeout returns a code ref which when called will cancel the timeout. TIME is in milliseconds. @_ will be set to LIST when the code block is executed.

XUL STRING

converts an XML XUL string to XUL::Gui objects. experimental. this function is provided to facilitate drag and drop of XML based XUL from tutorials for testing.
the perl functional syntax for tags should be used in all other cases

gui JAVASCRIPT

executes JAVASCRIPT in the gui, returns the result

passing a reference to a scalar or coderef as a value in an object constructor will create a data binding between the perl variable and its corresponding value in the gui.

    use XUL::Gui;
    my $title = 'initial title';
    display Window title => \$title,
        Button(
            label => 'update title',
            oncommand => sub {
                $title = 'title updated via data binding';
            }
        );

a property on a previously declared object can also be bound by taking a reference to it:

    display Label( id => 'lbl', value => 'initial value'),
        Button(
            label => 'update',
            oncommand => sub {
                my $label = \ID(lbl)->value;
                $$label = 'new value';
            }
        )

this is just an application of the normal bidirectional behavior of gui accessors:

    for (ID(lbl)->value) {
        print "$_\n";  # gets the current value from the gui
        $_ = 'new';    # sets the value in the gui
        print "$_\n";  # gets the value from the gui again
    }

the following functions all apply pragmas to their CODE blocks. in some cases, they also take a list. this list will be @_ when the CODE block executes. this is useful for sending in values from the gui, if you don't want to use a now {block}

this module will automatically buffer certain actions within event handlers. autobuffering will queue setting of values in the gui until there is a get, the event handler ends, or doevents is called. this eliminates the need for many common applications of the buffered pragma.

flush

flush the autobuffer

buffered {CODE} LIST

delays sending all messages to the gui. partially deprecated (see autobuffering)

    buffered { $ID{$_}->value = '' for qw/a bunch of labels/ };
    # all labels are cleared at once

cached {CODE}

turns on caching of gets from the gui

now {CODE}

execute immediately, from inside a buffered or cached block, without causing a buffer flush or cache reset. buffered and cached will not work inside a now block.

delay {CODE} LIST

delays executing its CODE until the next gui refresh. useful for triggering widget initialization code that needs to run after the gui objects are rendered. the first element of LIST will be in $_ when the code block is executed

noevents {CODE} LIST

disable event handling

doevents

force a gui update cycle before an event handler finishes

mapn {CODE} NUMBER LIST

map over n elements at a time in @_ with $_ == $_[0]

    print mapn {$_ % 2 ? "@_" : " [@_] "} 3 => 1..20;
    > 1 2 3 [4 5 6] 7 8 9 [10 11 12] 13 14 15 [16 17 18] 19 20

zip LIST of ARRAYREF

    %hash = zip [qw/a b c/], [1..3];

apply {CODE} LIST

apply a function to a copy of LIST and return the copy

    print join ", " => apply {s/$/ one/} "this", "and that";
    > this one, and that one

toggle TARGET OPT1 OPT2

alternate a variable between two states

    toggle $state;                  # opts default to 0, 1
    toggle $state => 'red', 'blue';

bitmap WIDTH HEIGHT OCTETS

returns a binary .bmp bitmap image. OCTETS is a list of BGR values

    bitmap 2, 2, qw(255 0 0 255 0 0 255 0 0 255 0 0);  # 2px blue square

for efficiency, rather than a list of OCTETS, you can send in a single array reference. each element of the array reference can either be an array reference of octets, or a packed string pack "C*" => OCTETS

bitmap2src WIDTH HEIGHT OCTETS

returns a packaged bitmap image that can be directly assigned to an image tag's src attribute.
arguments are the same as bitmap()

    $ID{myimage}->src = bitmap2src 320, 180, @image_data;

    # access attributes and properties
    $object->value = 5;     # sets the value in the gui
    print $object->value;   # gets the value from the gui

    # the attribute is set if it exists, otherwise the property is set
    $object->_value = 7;    # sets the property directly

    # method calls
    $object->focus;                       # void context or
    $object->appendChild( H2('title') );  # any arguments are always methods
    print $object->someAccessorMethod_;   # append _ to force interpretation
                                          # as a JS method call

in addition to mirroring all of an object's existing javascript methods / attributes / and properties to perl (with identical spelling / capitalization), several default methods have been added to all objects

    ->removeChildren( LIST )          removes the children in LIST, or all children if none are given
    ->removeItems( LIST )             removes the items in LIST, or all items if none are given
    ->appendChildren( LIST )          appends the children in LIST
    ->prependChild( CHILD, [INDEX] )  inserts CHILD at INDEX (defaults to 0) in the parent's child list
    ->replaceChildren( LIST )         removes all children, then appends LIST
    ->appendItems( LIST )             append a list of items
    ->replaceItems( LIST )            removes all items, then appends LIST

ComboBox

create dropdown list boxes

    items   => [ ['displayed label' => 'value'], 'label is same as value' ... ]
    default => 'item selected if this matches its value'

also takes: label, oncommand, editable, flex
styles: liststyle, popupstyle, itemstyle
getter: value

too many changes to count. if anything is broken, please send in a bug report.

some options for display have been reworked from 0.36 to remove double negatives

widgets have changed quite a bit from version 0.36. they are the same under the covers, but the external interface is cleaner. for the most part, the following substitutions are all you need:

    $W                        -->  $_ or $_{W}
    $A{...}                   -->  $_{A}{...} or $_->attr(...)
    $C[...]                   -->  $_{C}[...] or $_->child(...)
    $M{...}                   -->  $_{M}{...} or $_->can(...)
    attribute 'label onclick' -->  $_->has('label onclick')
    widget {extends ...}      -->  widget {$_->extends(...)}

export tags were changed a little bit from 0.36

thread safety should be better than in 0.36

currently it is not possible to open more than one window, hopefully this will be fixed soon

the code that attempts to find firefox may not work in all cases, patches welcome

for the TextBox object, the behaviors of the "value" and "_value" methods are reversed. it works better that way and is more consistent with the behavior of other tags.

Eric Strom, <asg at cpan.org>

please report any bugs or feature requests to bug-xul-gui at rt.cpan.org, or through the web interface. I will be notified, and then you'll automatically be notified of progress on your bug as I make changes.

the mozilla development team

this program is free software; you can redistribute it and/or modify it under the terms of either: the GNU General Public License as published by the Free Software Foundation; or the Artistic License.
http://search.cpan.org/~asg/XUL-Gui-0.63/lib/XUL/Gui.pm
CC-MAIN-2018-09
en
refinedweb
Distributed Java Programming with RMI and CORBA

Qusay H. Mahmoud
January 2002

The Java Remote Method Invocation (RMI) mechanism and the Common Object Request Broker Architecture (CORBA) are the two most important and widely used distributed object systems. Each system has its own features and shortcomings. Both are being used in the industry for various applications ranging from e-commerce to health care. Selecting which of these two distribution mechanisms to use for a project is a tough task. This article presents an overview of RMI and CORBA, and more importantly it shows how to develop a useful application for downloading files from remote hosts. It then:

- Presents a brief overview of distributed object systems
- Provides a brief overview of RMI and CORBA
- Gives you a flavor of the effort involved in developing applications in RMI and CORBA
- Shows how to transfer files from remote machines using RMI and CORBA
- Provides a brief comparison of RMI and CORBA

The Client/Server Model

The client/server model is a form of distributed computing in which one program (the client) communicates with another program (the server) for the purpose of exchanging information. In this model, both the client and server usually speak the same language -- a protocol that both the client and server understand -- so they are able to communicate.

While the client/server model can be implemented in various ways, it is typically done using low-level sockets. Using sockets to develop client/server systems means that we must design a protocol, which is a set of commands agreed upon by the client and server through which they will be able to communicate. As an example, consider the HTTP protocol that provides a method called GET, which must be implemented by all web servers and used by web clients (browsers) in order to retrieve documents.

The Distributed Objects Model

A distributed object-based system is a collection of objects that isolates the requesters of services (clients) from the providers of services (servers) by a well-defined encapsulating interface. In other words, clients are isolated from the implementation of services as data representations and executable code. This is one of the main differences that distinguishes the distributed object-based model from the pure client/server model.

In the distributed object-based model, a client sends a message to an object, which in turn interprets the message to decide what service to perform. This service, or method, selection could be performed by either the object or a broker. The Java Remote Method Invocation (RMI) and the Common Object Request Broker Architecture (CORBA) are examples of this model.

RMI

RMI is a distributed object system that enables you to easily develop distributed Java applications. Developing distributed applications in RMI is simpler than developing with sockets since there is no need to design a protocol, which is an error-prone task. In RMI, the developer has the illusion of calling a local method from a local class file, when in fact the arguments are shipped to the remote target and interpreted, and the results are sent back to the callers.
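To make the protocol-design burden concrete, here is a minimal hedged sketch (not from the original article -- the host, port, and one-line "GET" command format are invented for illustration) of a socket client that must speak a hand-designed protocol:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class ProtocolClient {
    public static void main(String[] args) throws IOException {
        // Both sides must agree on every command, delimiter, and error
        // code of this home-grown protocol -- the error-prone work that
        // RMI removes.
        try (Socket socket = new Socket("localhost", 9000);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println("GET hello.txt");   // the agreed-upon command
            String status = in.readLine();  // e.g. "OK 42" or "ERR NotFound"
            System.out.println("server replied: " + status);
        }
    }
}

With RMI, the equivalent interaction is a single method call on a remote interface; the wire protocol is generated for you.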
The Genesis of an RMI Application

Developing a distributed application using RMI involves the following steps:

- Define a remote interface
- Implement the remote interface
- Develop the server
- Develop a client
- Generate stubs and skeletons, start the RMI registry, server, and client

We will now examine these steps through the development of a file transfer application.

Example: File Transfer Application

This application allows a client to transfer (or download) any type of file (plain text or binary) from a remote machine. The first step is to define a remote interface that specifies the signatures of the methods to be provided by the server and invoked by clients.

Define a remote interface

The remote interface for the file download application is shown in Code Sample 1. The interface FileInterface provides one method, downloadFile, that takes a String argument (the name of the file) and returns the data of the file as an array of bytes.

Code Sample 1: FileInterface.java

import java.rmi.Remote;
import java.rmi.RemoteException;

public interface FileInterface extends Remote {
    public byte[] downloadFile(String fileName) throws RemoteException;
}

Note the following characteristics about the FileInterface:

- It must be declared public, in order for clients to be able to load remote objects which implement the remote interface.
- It must extend the Remote interface, to fulfill the requirement for making the object a remote one.
- Each method in the interface must throw a java.rmi.RemoteException.

Implement the remote interface

The next step is to implement the interface FileInterface. A sample implementation is shown in Code Sample 2 (Code Samples 2 through 4 are reconstructed at the end of this section). Note that in addition to implementing the FileInterface, the FileImpl class extends the UnicastRemoteObject. This indicates that the FileImpl class is used to create a single, non-replicated, remote object that uses RMI's default TCP-based transport for communication.

Develop the server

The next step is to develop the server. The server needs to:

- Create an instance of the RMISecurityManager and install it
- Create an instance of the remote object (FileImpl in this case)
- Register the object created with the RMI registry

A sample implementation is shown in Code Sample 3 (reconstructed below). The statement Naming.rebind("//127.0.0.1/FileServer", fi) assumes that the RMI registry is running on the default port number, which is 1099. However, if you run the RMI registry on a different port number it must be specified in that statement. For example, if the RMI registry is running on port 4500, then the statement becomes:

Naming.rebind("//127.0.0.1:4500/FileServer", fi)

Also, it is important to note here that we assume the RMI registry and the server will be running on the same machine. If they are not, then simply change the address in the rebind method.

Develop a client

The next step is to develop a client. The client remotely invokes any methods specified in the remote interface (FileInterface). To do so, however, the client must first obtain a reference to the remote object from the RMI registry. Once a reference is obtained, the downloadFile method is invoked. A client implementation is shown in Code Sample 4 (reconstructed below). In this implementation, the client accepts two arguments at the command line: the first one is the name of the file to be downloaded and the second one is the address of the machine from which the file is to be downloaded, which is the machine that is running the file server.

Finally, it is time to start the RMI registry and run the server and client.
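The bodies of Code Samples 2, 3, and 4 did not survive in this copy of the article, so the classes below are hedged reconstructions rather than the author's exact listings. They follow the surrounding description (FileImpl extends UnicastRemoteObject; the server installs an RMISecurityManager and calls Naming.rebind; the client calls Naming.lookup and writes the bytes to a local file), but buffer handling and messages are assumptions.

Code Sample 2 (reconstructed): FileImpl.java

import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.rmi.RemoteException;
import java.rmi.server.UnicastRemoteObject;

public class FileImpl extends UnicastRemoteObject implements FileInterface {
    public FileImpl() throws RemoteException {
        super();
    }

    // Read the requested file from disk and return its contents.
    public byte[] downloadFile(String fileName) {
        try {
            File file = new File(fileName);
            byte[] buffer = new byte[(int) file.length()];
            BufferedInputStream input =
                new BufferedInputStream(new FileInputStream(file));
            input.read(buffer, 0, buffer.length);
            input.close();
            return buffer;
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
    }
}

Code Sample 3 (reconstructed): FileServer.java

import java.rmi.Naming;
import java.rmi.RMISecurityManager;

public class FileServer {
    public static void main(String[] args) {
        // Install a security manager so the RMI runtime may load classes.
        if (System.getSecurityManager() == null) {
            System.setSecurityManager(new RMISecurityManager());
        }
        try {
            FileInterface fi = new FileImpl();
            // Register the remote object with the RMI registry (port 1099).
            Naming.rebind("//127.0.0.1/FileServer", fi);
            System.out.println("FileServer is ready.");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Code Sample 4 (reconstructed): FileClient.java

import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.rmi.Naming;

public class FileClient {
    public static void main(String[] args) {
        if (args.length != 2) {
            System.out.println("Usage: java FileClient fileName machineName");
            System.exit(0);
        }
        try {
            // Look up the remote object on the machine given as args[1].
            String name = "//" + args[1] + "/FileServer";
            FileInterface fi = (FileInterface) Naming.lookup(name);
            byte[] data = fi.downloadFile(args[0]);
            // Write the downloaded bytes to a local file of the same name.
            BufferedOutputStream out =
                new BufferedOutputStream(new FileOutputStream(args[0]));
            out.write(data, 0, data.length);
            out.flush();
            out.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}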
To start the RMI registry on the default port number, use the command rmiregistry or start rmiregistry on Windows. To start the RMI registry on a different port number, provide the port number as an argument to the RMI registry:

prompt> rmiregistry portNumber

Once the RMI registry is running, you can start the server FileServer. However, since the RMI security manager is being used in the server application, you need a security policy to go with it. Here is a sample security policy:

grant {
    permission java.security.AllPermission "", "";
};

Note: this is just a sample policy. It allows anyone to do anything. For your mission critical applications, you need to specify more constrained security policies.

Now, in order to start the server you need a copy of all the classes (including stubs and skeletons) except the client class (FileClient.class). To start the server use the following command, assuming that the security policy is in a file named policy.txt:

prompt> java -Djava.security.policy=policy.txt FileServer

To start the client on a different machine, you need a copy of the remote interface (FileInterface.class) and stub (FileImpl_Stub.class). To start the client use the command:

prompt> java FileClient fileName machineName

where fileName is the file to be downloaded and machineName is the machine where the file is located (the same machine runs the file server). If everything goes ok then the client exits and the downloaded file is on the local machine.

To run the client we mentioned that you need a copy of the interface and stub. A more appropriate way to do this is to use RMI dynamic class loading. The idea is that you do not need copies of the interface and the stub. Instead, they can be located in a shared directory for the server and the client, and whenever a stub or a skeleton is needed, it is downloaded automatically by the RMI class loader. To do this you run the client, for example, using the following command: java -Djava.rmi.server.codebase= FileClient fileName machineName. For more information on this, please see Dynamic Code Loading using RMI.

CORBA

The Common Object Request Broker Architecture (or CORBA) is an industry standard developed by the Object Management Group (OMG) to aid in distributed objects programming. It is important to note that CORBA is simply a specification. A CORBA implementation is known as an ORB (or Object Request Broker). There are several CORBA implementations available on the market such as VisiBroker, ORBIX, and others. JavaIDL is another implementation that comes as a core package with the JDK 1.3 or above.

CORBA was designed to be platform and language independent. Therefore, CORBA objects can run on any platform, located anywhere on the network, and can be written in any language that has Interface Definition Language (IDL) mappings. Similar to RMI, CORBA objects are specified with interfaces. Interfaces in CORBA, however, are specified in IDL. While IDL is similar to C++, it is important to note that IDL is not a programming language. For a detailed introduction to CORBA, please see Distributed Programming with Java: Chapter 11 (Overview of CORBA).

The Genesis of a CORBA Application

There are a number of steps involved in developing CORBA applications. These are:

- Define an interface in IDL
- Map the IDL interface to Java (done automatically)
- Implement the interface
- Develop the server
- Develop a client
- Run the naming service, the server, and the client.
We now explain each step by walking you through the development of a CORBA-based file transfer application, which is similar to the RMI application we developed earlier in this article. Here we will be using JavaIDL, which is a core package of JDK 1.3+.

Define the Interface

When defining a CORBA interface, think about the type of operations that the server will support. In the file transfer application, the client will invoke a method to download a file. Code Sample 5 shows the interface for FileInterface. Data is a new type introduced using the typedef keyword. A sequence in IDL is similar to an array except that a sequence does not have a fixed size. An octet is an 8-bit quantity that is equivalent to the Java type byte. Note that the downloadFile method takes one parameter of type string that is declared with the in mode. IDL defines three parameter-passing modes: in (for input from client to server), out (for output from server to client), and inout (used for both input and output).

Code Sample 5: FileInterface.idl

interface FileInterface {
    typedef sequence<octet> Data;
    Data downloadFile(in string fileName);
};

Once you finish defining the IDL interface, you are ready to compile it. The JDK 1.3+ comes with the idlj compiler, which is used to map IDL definitions into Java declarations and statements. The idlj compiler accepts options that allow you to specify whether you wish to generate client stubs, server skeletons, or both. The -f<side> option is used to specify what to generate. The side can be client, server, or all for client stubs and server skeletons. In this example, since the application will be running on two separate machines, the -fserver option is used on the server side, and the -fclient option is used on the client side.

Implement the interface

Now, we provide an implementation of the downloadFile method. This implementation is known as a servant, and as you can see from Code Sample 6 (reconstructed after the conclusion below), the class FileServant extends the _FileInterfaceImplBase class to specify that this servant is a CORBA object.

Develop the server

The next step is developing the CORBA server. The FileServer class, shown in Code Sample 7 (also reconstructed below), implements a CORBA server that does the following:

- Initializes the ORB
- Creates a FileServant object
- Registers the object in the CORBA Naming Service (COS Naming)
- Prints a status message
- Waits for incoming client requests

Once the FileServer has an ORB, it can register the CORBA service. It uses the COS Naming Service specified by OMG and implemented by Java IDL to do the registration. It starts by getting a reference to the root of the naming service. This returns a generic CORBA object. To use it as a NamingContext object, it must be narrowed down (in other words, casted) to its proper type, and this is done using the statement:

NamingContext ncRef = NamingContextHelper.narrow(objRef);

The ncRef object is now an org.omg.CosNaming.NamingContext. You can use it to register a CORBA service with the naming service using the rebind method.

Develop a client

The next step is to develop a client. An implementation is shown in Code Sample 8 (reconstructed below). Once a reference to the naming service has been obtained, it can be used to access the naming service and find other services (for example, the FileTransfer service). When the FileTransfer service is found, the downloadFile method is invoked.

There are a number of steps involved in running the application:

- Run the CORBA naming service. This can be done using the command tnameserv. By default, it runs on port 900.
If you cannot run the naming service on this port, then you can start it on another port. To start it on port 2500, for example, use the following command:

prompt> tnameserv -ORBinitialPort 2500

- Start the server. This can be done as follows, assuming that the naming service is running on the default port number:

prompt> java FileServer

If the naming service is running on a different port number, say 2500, then you need to specify the port using the ORBInitialPort option as follows:

prompt> java FileServer -ORBInitialPort 2500

- Generate stubs for the client. Before we can run the client, we need to generate stubs for the client. To do that, get a copy of the FileInterface.idl file and compile it using the idlj compiler, specifying that you wish to generate client-side stubs, as follows:

prompt> idlj -fclient FileInterface.idl

- Run the client. Now you can run the client using the following command, assuming that the naming service is running on port 2500:

prompt> java FileClient hello.txt -ORBInitialPort 2500

where hello.txt is the file we wish to download from the server.

Note: if the naming service is running on a different host, then use the -ORBInitialHost option to specify where it is running. For example, if the naming service is running on port number 4500 on a host with the name gosling, then you start the client as follows:

prompt> java FileClient hello.txt -ORBInitialHost gosling -ORBInitialPort 4500

Alternatively, these options can be specified at the code level using properties. So instead of initializing the ORB as:

ORB orb = ORB.init(argv, null);

it can be initialized specifying the CORBA server machine (called gosling) and the naming service's port number (2500) as follows:

Properties props = new Properties();
props.put("org.omg.CORBA.ORBInitialHost", "gosling");
props.put("org.omg.CORBA.ORBInitialPort", "2500");
ORB orb = ORB.init(args, props);

Exercise

In the file transfer application, the client (in both cases RMI and CORBA) needs to know the name of the file to be downloaded in advance. No methods are provided to list the files available on the server. As an exercise, you may want to enhance the application by adding another method that lists the files available on the server (a sketch of one such extension appears after the conclusion below). Also, instead of using a command-line client you may want to develop a GUI-based client. When the client starts up, it invokes a method on the server to get a list of files, then pops up a menu displaying the files available, where the user would be able to select one or more files to be downloaded, as shown in Figure 1.

Figure 1: GUI-based File Transfer Client

CORBA vs. RMI

Code-wise, it is clear that RMI is simpler to work with since the Java developer does not need to be familiar with the Interface Definition Language (IDL). In general, however, CORBA differs from RMI in the following areas:

- CORBA interfaces are defined in IDL and RMI interfaces are defined in Java. RMI-IIOP allows you to write all interfaces in Java (see RMI-IIOP).
- CORBA supports in and out parameters, while RMI does not since local objects are passed by copy and remote objects are passed by reference.
- CORBA was designed with language independence in mind. This means that some of the objects can be written in Java, for example, and other objects can be written in C++ and yet they all can interoperate. Therefore, CORBA is an ideal mechanism for bridging islands between different programming languages. On the other hand, RMI was designed for a single language where all objects are written in Java.
Note however, with RMI-IIOP it is possible to achieve interoperability.

- CORBA objects are not garbage collected. As we mentioned, CORBA is language independent and some languages (C++ for example) do not support garbage collection. This can be considered a disadvantage since once a CORBA object is created, it continues to exist until you get rid of it, and deciding when to get rid of an object is not a trivial task. On the other hand, RMI objects are garbage collected automatically.

Conclusion

Developing distributed object-based applications can be done in Java using RMI or JavaIDL (an implementation of CORBA). The use of both technologies is similar since the first step is to define an interface for the object. Unlike RMI, however, where interfaces are defined in Java, CORBA interfaces are defined in the Interface Definition Language (IDL). This, however, adds another layer of complexity where the developer needs to be familiar with IDL, and equally important, its mapping to Java. Making a selection between these two distribution mechanisms really depends on the project at hand and its requirements. I hope this article has provided you with enough information to get started developing distributed object-based applications and enough guidance to help you select a distribution mechanism.
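As with the RMI listings, the bodies of Code Samples 6, 7, and 8 did not survive in this copy, so the classes below are hedged reconstructions: they follow the surrounding description (a FileServant extending _FileInterfaceImplBase, a server that initializes the ORB and rebinds the name FileTransfer, a client that resolves that name and invokes downloadFile), but buffer handling, status messages, and name-component details are assumptions.

Code Sample 6 (reconstructed): FileServant.java

import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;

public class FileServant extends _FileInterfaceImplBase {
    // Read the requested file and return its bytes (the IDL type Data,
    // a sequence<octet>, maps to byte[] in Java).
    public byte[] downloadFile(String fileName) {
        File file = new File(fileName);
        byte[] buffer = new byte[(int) file.length()];
        try {
            BufferedInputStream input =
                new BufferedInputStream(new FileInputStream(file));
            input.read(buffer, 0, buffer.length);
            input.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return buffer;
    }
}

Code Sample 7 (reconstructed): FileServer.java

import org.omg.CORBA.ORB;
import org.omg.CosNaming.NameComponent;
import org.omg.CosNaming.NamingContext;
import org.omg.CosNaming.NamingContextHelper;

public class FileServer {
    public static void main(String[] args) {
        try {
            // Initialize the ORB and create the servant.
            ORB orb = ORB.init(args, null);
            FileServant fileRef = new FileServant();
            orb.connect(fileRef);
            // Get the root naming context and narrow it to NamingContext.
            org.omg.CORBA.Object objRef =
                orb.resolve_initial_references("NameService");
            NamingContext ncRef = NamingContextHelper.narrow(objRef);
            // Register the servant under the name "FileTransfer".
            NameComponent[] path = { new NameComponent("FileTransfer", "") };
            ncRef.rebind(path, fileRef);
            System.out.println("Server started...");
            // Wait for invocations from clients.
            java.lang.Object sync = new java.lang.Object();
            synchronized (sync) {
                sync.wait();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Code Sample 8 (reconstructed): FileClient.java

import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import org.omg.CORBA.ORB;
import org.omg.CosNaming.NameComponent;
import org.omg.CosNaming.NamingContext;
import org.omg.CosNaming.NamingContextHelper;

public class FileClient {
    public static void main(String[] args) {
        try {
            ORB orb = ORB.init(args, null);
            // Find the FileTransfer service in the naming service.
            org.omg.CORBA.Object objRef =
                orb.resolve_initial_references("NameService");
            NamingContext ncRef = NamingContextHelper.narrow(objRef);
            NameComponent[] path = { new NameComponent("FileTransfer", "") };
            FileInterface fileRef =
                FileInterfaceHelper.narrow(ncRef.resolve(path));
            // Download the file and write it to the local disk.
            byte[] data = fileRef.downloadFile(args[0]);
            BufferedOutputStream out =
                new BufferedOutputStream(new FileOutputStream(args[0]));
            out.write(data, 0, data.length);
            out.flush();
            out.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Returning to the exercise: one minimal way to extend the RMI version with a file-listing method (the method name listFiles and the use of the server's current directory are my choices, not the article's):

// Added to FileInterface.java:
public String[] listFiles() throws RemoteException;

// Added to FileImpl.java, assuming files are served from the server's
// current working directory:
public String[] listFiles() {
    return new java.io.File(".").list();
}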
http://blog.csdn.net/CanFly/article/details/13439
CC-MAIN-2018-09
en
refinedweb
Providing Access to Instance and Class Variables

Don't make any instance or class variable public without good reason. Often, instance variables don't need to be explicitly set or gotten -- often that happens as a side effect of method calls (see the short example at the end of this post). One example of appropriate public instance variables is the case where the class is essentially a data structure, with no behavior.

Referring to Class Variables and Methods

Avoid using an object to access a class (static) variable or method. Use a class name instead. For example:

classMethod();           // OK
AClass.classMethod();    // OK
anObject.classMethod();  // AVOID!

Constants

Numerical constants (literals) should not be coded directly, except for -1, 0, and 1, which can appear in a for loop as counter values.

Variable Assignments

Avoid assigning several variables to the same value in a single statement. It is hard to read. Do not use the assignment operator in a place where it can be easily confused with the equality operator. Example:

if (c++ = d++) {  // AVOID!
    ...
}

should be written as

if ((c++ = d++) != 0) {
    ...
}

Do not use embedded assignments in an attempt to improve run-time performance. This is the job of the compiler. Example:

d = (a = b + c) + r;  // AVOID!

should be written as

a = b + c;
d = a + r;

Miscellaneous Practices

1. Parentheses

It is generally a good idea to use parentheses liberally in expressions involving mixed operators to avoid operator precedence problems. Example:

if (a == b && c == d)      // AVOID!
if ((a == b) && (c == d))  // USE

2. Returning Values

Try to make the structure of your program match the intent. Example:

if (booleanExpression) {
    return true;
} else {
    return false;
}

should instead be written as

return booleanExpression;

Similarly,

if (condition) {
    return x;
}
return y;

should be written as

return (condition ? x : y);

3. Expressions before '?' in the Conditional Operator

If an expression containing a binary operator appears before the ? in the ternary ?: operator, it should be parenthesized. Example:

(x >= 0) ? x : -x;

4. Special Comments

Use XXX in a comment to flag something that is bogus but works. Use FIXME to flag something that is bogus and broken.
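To make the first rule concrete, here is a small illustrative example (the Account class and its fields are invented for this note, not part of the original conventions):

public class Account {
    private double balance;  // not public: callers go through methods

    public double getBalance() {
        return balance;
    }

    // The balance changes as a side effect of a method call, as the
    // rule suggests, rather than through direct field assignment.
    public void deposit(double amount) {
        balance += amount;
    }
}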
http://www.javatutorialcorner.com/2013/09/java-best-practice-programming-practices.html
CC-MAIN-2018-09
en
refinedweb
Paradigm

At least one thing became clear at the Desktop Matters conference last week. The way that we write Swing applications is changing for the better. A few other things that became clear are that Chet has a great sense of humor to go along with a lot of patience for friendly ribbing; Hans really is a nice guy; and Romain has a startling talent for video clairvoyance (Chet, that race car demo was very...um, quaint).

Almost gone are the days when every new Swing programmer will be left on their own standing in front of the huge maze of Swing with only the "Abandon hope all ye who enter" sign for company. Those of us who still remember our first Swing program probably remember something like this:

public class MyFirstReallyCoolSwingApp extends JFrame {
    public static void main( String[] args ) {
        MyFirstReallyCoolSwingApp myApp = new MyFirstReallyCoolSwingApp();
        ...
    }
}

EDT? Isn't that a home pregnancy test or something? So much for the old paradigm. A paradigm is only worth about 20 cents, anyway. Alrighty, moving right along then. (A quick sketch of the EDT-friendly alternative to that snippet appears below.)

These days there is, and has been for some time now, actually, a new life and energy behind desktop Java. Personally, I think this resurgence can be traced pretty easily if we look back now. I think we can thank Eclipse for showing the way. It came along at just the right time and provided us not only with the great Java development tool but also with the prototypical example of a great Java desktop application. Of course Eclipse simply could not have been so successful without the improvements made to the Java platform in general. That brings us to another thing that became clear to me at Desktop Matters: there are a lot of very smart people working at Sun. Okay, I haven't actually met very many of them, but this is certainly true of the Swing team. I am extrapolating from there.

A short while later, Netbeans 5.0 put the lie to the "Swing is slow" line. It became abundantly clear to anyone who was really paying attention that desktop Java was an increasing priority at Sun. The release of all those desktop-related improvements in Java 6 was yet another data point in the rise of Swing's stock. And after hearing the Swing team talk about their plans for the future, it looks to me like things are still trending upwards.

It is not clear to me what the business case is for this renewed commitment Sun has for desktop Java. What return does Sun expect on their investment? It certainly seems like there has been more corporate support, but perhaps what we're seeing is simply the result of the energy and dedication of a small but talented team of Swing engineers? As just an outsider looking in, I won't pretend to know. But I'm also not going to look a gift horse in the mouth because, darn it, I really enjoy Swing programming. I love the flexibility of Swing. I love the power of Java2D. And there is some great work going on in SwingLabs.

Now I'm not a Java2D wizard like Chris, I can't design ultra cool interfaces like Romain, and I certainly can't time things as well as Chet, but I do know that Swing will be a lot nicer to work with very soon thanks to JSRs 295 and 296 along with the support for them that is going into Netbeans. And this will apply to novice and experienced programmers alike. This year looks like it will be a continuation of the rise in excitement surrounding desktop Java.
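For anyone who never saw the "new paradigm" spelled out, here is a minimal sketch of the EDT-aware startup that Swing's documentation came to recommend (reusing the class name from the snippet above; this is illustrative, not code from the original post):

import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.SwingUtilities;

public class MyFirstReallyCoolSwingApp {
    public static void main(String[] args) {
        // Swing components must be created and touched on the Event
        // Dispatch Thread (EDT), so schedule the GUI setup there.
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                JFrame frame = new JFrame("Hello");
                frame.add(new JLabel("hello, world"));
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.pack();
                frame.setVisible(true);
            }
        });
    }
}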
With the release of the Filthy Rich Clients book and the Netbeans RCP book as well as the upcoming focus on media, 3D, animation, and deployment in JDK 7, I just can't wait to see what we'll be talking about at next year's Desktop Matters conference.

And one last thing. Since we were on the subject of my interface design impairment, do you suppose there is any chance of Romain putting together a UI design video podcast. You know, kind of like "Romain's Eye for the Programmer Guy" or something? Maybe that's not a good title. The pronunciation of the last word would always be in question.
https://weblogs.java.net/blog/diverson/archive/2007/03/paradigm_swing.html
CC-MAIN-2015-22
en
refinedweb
21 May 2010 19:00 [Source: ICIS news]

TORONTO (ICIS news)--Europe will need more farm land and much higher yields as it shifts to a bio-based economy with agriculture at its centre, Germany's chemical industry trade group VCI said on Friday.

Agriculture faced three challenges, namely to provide food, biofuels for energy, as well as bio-based raw materials for chemicals, plastics and other industries, said BASF executive board member Stefan Marcinowski in an update. Marcinowski heads VCI's biotechnology portfolio.

This meant that crop output and yields needed to rise "dramatically," he said. The EU's "land set aside" policies - which have promoted the idling of agricultural lands to reduce surplus production - would quickly become a thing of the past, he said.

"Industrial countries are poised for a 'renaissance' in terms of agricultural lands, with a need for more arable land," he said.

However, the bio-economy would only work if it used and integrated all applications of biotechnology - an objective that had not yet been reached.

Marcinowski called on the country's government to push for EU-wide comprehensive regulation to create a legally reliable framework for genetically modified products throughout the supply chain - including seeds, food products, animal feed and renewable raw materials.

As it stood, "zero tolerance" for the development of genetically modified products came at a heavy cost and created competitive disadvantages. As one example, Marcinowski pointed to Europe's imports of soybeans.

Still, the EU commission's recent approval for BASF's genetically modified Amflora potato should be an encouraging signal for innovation.

Going forward, the majority of firms expected to increase their sales in the first half of 2010, he said, citing a VCI survey of 212 German bio-based firms. About half of firms surveyed planned to increase their research and development budgets, the survey found.

Despite the eventual move toward a bio-based economy, the German biotech sector was currently facing problems raising capital, mainly because private venture firms had reduced their engagement in biotech and agriculture, he said.
http://www.icis.com/Articles/2010/05/21/9361771/germanys-chems-see-need-for-more-eu-farm-land-higher-yields.html
CC-MAIN-2015-22
en
refinedweb
SSL_set_session - set a TLS/SSL session to be used during TLS/SSL connect

#include <openssl/ssl.h>

int SSL_set_session(SSL *ssl, SSL_SESSION *session);

SSL_set_session() sets session to be used when the TLS/SSL connection is to be established. SSL_set_session() is only useful for TLS/SSL clients. When the session is set, the reference count of session is incremented by 1. Whether the session was reused can be queried with the SSL_session_reused() call.

If there is already a session set inside ssl (because it was set with SSL_set_session() before or because the same ssl was already used for a connection), SSL_SESSION_free() will be called for that session.

The following return values can occur:

0 - The operation failed; check the error stack to find out the reason.

1 - The operation succeeded.

ssl, SSL_SESSION_free, SSL_get_session, SSL_session_reused, SSL_CTX_set_session_cache_mode
http://www.openssl.org/docs/ssl/SSL_set_session.html
CC-MAIN-2015-22
en
refinedweb
On Wed, Sep 19, 2012 at 09:57:10AM -0400, Laine Stump wrote:
> These enums originally were put into the flags for virNetworkUpdate,
> and when they were moved into their own enum, the numbers weren't
> appropriately changed, causing the commands to start with value 2
> instead of 1. This causes problems for things like ENUM_IMPL,
> which wants a string for every value in the requested range, including those
> not used in the enum.
> ---
> include/libvirt/libvirt.h.in | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
> index 84ac2d0..0f67cbb 100644
> --- a/include/libvirt/libvirt.h.in
> +++ b/include/libvirt/libvirt.h.in
> @@ -2356,10 +2356,10 @@ int virNetworkUndefine (virNetworkPtr network);
> */
> typedef enum {
> VIR_NETWORK_UPDATE_COMMAND_NONE = 0, /* (invalid) */
> - VIR_NETWORK_UPDATE_COMMAND_MODIFY = 2, /* modify an existing element */
> - VIR_NETWORK_UPDATE_COMMAND_DELETE = 3, /* delete an existing element */
> - VIR_NETWORK_UPDATE_COMMAND_ADD_LAST = 4, /* add an element at end of list */
> - VIR_NETWORK_UPDATE_COMMAND_ADD_FIRST = 5, /* add an element at start of list */
> + VIR_NETWORK_UPDATE_COMMAND_MODIFY = 1, /* modify an existing element */
> + VIR_NETWORK_UPDATE_COMMAND_DELETE = 2, /* delete an existing element */
> + VIR_NETWORK_UPDATE_COMMAND_ADD_LAST = 3, /* add an element at end of list */
> + VIR_NETWORK_UPDATE_COMMAND_ADD_FIRST = 4, /* add an element at start of list */
> #ifdef VIR_ENUM_SENTINELS
> VIR_NETWORK_UPDATE_COMMAND_LAST
> #endif

ACK, assuming we have not released any version with the broken numbering

Daniel
--
https://www.redhat.com/archives/libvir-list/2012-September/msg01362.html
CC-MAIN-2015-22
en
refinedweb
It's too late tonight, but monday morning I will likely be committing a major revamping of the buildworld code. It will do a number of things:

STAGE1:

* It compartmentalizes the bootstrap/buildtools from the cross-build setup from the world stage. Instead of unfathomable subdirectory names in /usr/obj/usr/src/* the stage directories are now flattened out and better named, e.g. btools_<currentarch> for bootstrap and build tools, ctools_<currentarch>_<targetarch> for the cross compiler tools, and world_<targetarch> for the main buildworld.

* The build-tools will contain all tools required by the build, not just the ones which might have version/portability problems. This is so I can remove the base system path (e.g. like /usr/bin) from the main world build, which in turn prevents buildworld from accidentally using programs that it isn't supposed to be using. I'd like to remove it from the cross-build stage too but I'd have to build the compiler in the build-tools stage to be able to do that and I haven't decided whether it's worth the extra time yet or not.

* The buildworld target will properly remove the entire buildworld object hierarchy. It turns out that it was only removing the world stage before; it wasn't removing the build tools and cross tools stages.

* New targets to make incremental buildworlds easier will be introduced, e.g. quickworld and realquickworld. quickworld skips the build and cross tools, realquickworld skips the build tools, cross tools, and depend step.

* The concept of platform-native compiled programs which are unaffected by the cross-build, and all Makefiles that generate these little helper programs now use the new concept. New suffixes have been introduced: '.no' for 'native object module' and '.nx' for 'native executable'. This is replacing the build-tools: target that existed in the tree. The problem is that the build-tools stage in the old build was polluting the world stage's namespace a bit more than it should have. This will primarily make cross-building less hackish, once we start doing cross builds.

* Fix a bug in 'wmake', which simulates the buildworld environment for piecemeal compilation/testing. It was not using /usr/src/share/mk.

* Additional .ORDER: constraints (not finished)

STAGE2:

* Fix .c.o and friends in sys.mk (specify -o ${.TARGET} instead of assuming that the target is named the same).

* Continued messing around with .ORDER for -j N builds.

* Cleanup passes

-Matt
http://leaf.dragonflybsd.org/mailarchive/kernel/2004-03/msg00397.html
CC-MAIN-2015-22
en
refinedweb
Hi Matthew,

I think that "technically" the answer is "no", however you could use federation (with a little teeny bit of tweaking).

So it's possible to use qpid-route to federate between two brokers, however there's a line in qpid-route that throws an exception if you try to do this - it moans about linking on the same host. However...... the broker actually happily allows this.

I hacked the qpid-route addLink method thus...

    def getLink(self):
        links = self.agent.getObjects(_class="link")
        for link in links:
            if self.remote.match(link.host, link.port):
                return link
        return None

    def addLink(self, remoteBroker, interbroker_mechanism=""):
        self.remote = BrokerURL(remoteBroker)
        #if self.local.match(self.remote.host, self.remote.port):
        #    raise Exception("Linking broker to itself is not permitted")
        brokers = self.agent.getObjects(_class="broker")
        broker = brokers[0]
        link = self.getLink()
        if link == None:

So literally just commented out the test for self.local.match and the raise Exception, and it works - one can federate from one exchange to another on the same broker. It's slightly controversial :-) but I wanted to see if it was possible. You'd want to be careful to avoid circular routes etc. But this approach might be what you're looking for unless someone can come up with a better mechanism.

BTW federation only works with the C++ broker IIRC, so this might be an issue for you - I believe that you were planning on using the Java broker (which gives you broker side message selectors) :-/

Frase

On 02/05/12 18:42, m.luchak@smartasking.com wrote:
> Hi All,
>
> Thanks for all your help with the selector questions that I had last week. We are conducting more tests to see if Qpid is the solution for our architecture but we have some doubts...Our flow requires that exchanges subscribe to exchanges and queues in turn subscribe to multiple exchanges.
>
> 1) Is it possible to bind Exchanges to Exchanges? I have seen some posts giving an emphatic NO: ([])
>
> 2) The x-binding syntax, that I have encountered, for binding a queue to multiple exchanges seems a little convoluted and the last time I asked you guys for help I discovered wonderful new simplified classes and methods :). Is the following example the best practice for creating multiple bindings?
>
> x-bindings:[{queue:MYQUEUE,exchange:'FIRST_EXCHANGE',key: 'binding1', arguments:{'x-match':any,'a':'10'}}, { queue:MYQUEUE ,exchange:'SECOND_EXCHANGE',key: 'binding2', arguments: {'x-match':any,'a':'10'}}]
>
> If so I would need to persist the "binding keys" (binding1, binding2) in order to remove the bindings later.
>
> thanks for all your help,
> Matthew

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
For additional commands, e-mail: users-help@qpid.apache.org
http://mail-archives.apache.org/mod_mbox/qpid-users/201205.mbox/%3C4FA17E90.8060907@blueyonder.co.uk%3E
CC-MAIN-2015-22
en
refinedweb
The previous article in this series, Introduction to Play 2 for Java, introduced the Play 2 Framework, demonstrated how to set up a Play environment, and presented a simple Hello, World application. Here I expand upon that foundation to show you how to build a typical web application using Play's Scala Templates and how Play implements a domain-driven design. We build a simple Widget management application that presents a list of widgets and allows the user to add a new widget, update an existing widget, and delete a widget.

Domain-Driven Design

If you come from a Java EE or from a Spring background, then you're probably familiar with separating persistence logic from domain objects: A domain object contains the object's properties, and a separate repository object contains logic for persisting domain objects to and from external storage, such as a database. Play implements things a little differently: The domain object not only encapsulates the object's properties, but it also defines persistence methods. Play does not force you to implement your applications like this, but if you want your Play application to be consistent with other Play applications, then it is considered a best practice.

Domain objects should be stored, by default, in a "models" package and follow the structure shown in Listing 1.

Listing 1. Structure of a Play domain object

public class Widget {
    // Fields are public; getters and setters are automatically generated
    // and used, e.g. the assignment id = "a" is translated to setId("a")
    // by Play
    public String id;
    public String name;

    // Finders are part of the domain object, but are static
    public static Widget findById( String id ) { ... }
    public static List<Widget> findAll() { ... }

    // Update is part of the domain object and updates this instance
    // with a command like JPA.em().merge(this);
    public void update() { ... }

    // Save is part of the domain object and inserts a new instance
    // with a command like JPA.em().persist(this);
    public void save() { ... }

    // Delete is part of the domain object and deletes the current
    // instance with a command like JPA.em().remove(this);
    public void delete() { ... }
}

Play domain objects define all their fields as public and then Play implicitly wraps assignment calls with set() methods. It makes development a little easier, but it requires a little faith on your part to trust that Play will protect your fields for you. All query methods are defined in the domain object, but they are static, so they do not require an object instance to execute them. You'll typically see a findAll() that returns a list, findByXX() that finds a single object, and so forth. Finally, methods that operate on the object instance are defined as instance methods: save() inserts a new object into the data store, update() updates an existing object in the data store, and delete() removes an object from the data store.

For our example we'll bypass using a database and instead store the "cache" of data objects in memory as a static array in the Widget class itself. Note that this is not a recommended approach, but considering we're learning Play and not JPA/Hibernate, this simplifies our implementation. Listing 2 shows the source code for the Widget class.

Listing 2.
Widget.java

package models;

import java.util.List;
import java.util.ArrayList;

public class Widget {

    public String id;
    public String name;
    public String description;

    public Widget() {
    }

    public Widget(String id, String name, String description) {
        this.id = id;
        this.name = name;
        this.description = description;
    }

    public static Widget findById( String id ) {
        for( Widget widget : widgets ) {
            if( widget.id.equals( id ) ) {
                return widget;
            }
        }
        return null;
    }

    public static List<Widget> findAll() {
        return widgets;
    }

    public void save() {
        widgets.add( this );
    }

    public void update() {
        for( Widget widget : widgets ) {
            if( widget.id.equals( id ) ) {
                widget.name = name;
                widget.description = description;
            }
        }
    }

    public void delete() {
        widgets.remove( this );
    }

    private static List<Widget> widgets;

    static {
        widgets = new ArrayList<Widget>();
        widgets.add( new Widget( "1", "Widget 1", "My Widget 1" ) );
        widgets.add( new Widget( "2", "Widget 2", "My Widget 2" ) );
        widgets.add( new Widget( "3", "Widget 3", "My Widget 3" ) );
        widgets.add( new Widget( "4", "Widget 4", "My Widget 4" ) );
    }
}

The Widget class defines three instance variables: id, name, and description. Because a Widget is a generic term for a "thing," its attributes are not important, so these will suffice. The bottom of Listing 2 defines a static ArrayList named widgets and a static code block that initializes it with some sample values. The other methods operate on this static object: findAll() returns the widgets list; findById() searches through all widgets for a matching one; save() adds the object to the widgets list; update() finds the widget with the matching ID and updates its fields; and delete() removes the object from the widgets list. Replace these method implementations with your database or NoSQL persistence methods and your domain object will be complete.

Scala Templates

With our domain object defined, let's turn our attention to the application workflow. We're going to create controller actions for the following routes:

- GET /widgets: Returns a list of widgets
- POST /widgets: Creates a new widget
- GET /widget/id: Returns a specific widget
- POST /widget-update/id: Updates a widget
- DELETE /widget/id: Deletes a specific widget

Let's start by reviewing the GET /widgets URI, which returns a list of widgets. We need to add a new entry in the routes file:

GET /widgets controllers.WidgetController.list()

This route maps a call to GET /widgets to the WidgetController's list() action method. The list() method is simple: It retrieves the list of Widgets by calling Widget.findAll() (that we created in the previous section) and sends that to our list template:

public static Result list() {
    return ok( list.render( Widget.findAll() ) );
}

The list template (list.scala.html) accepts a list of Widgets and renders them in HTML, which is shown in Listing 3.

Listing 3. list.scala.html

@( widgets: List[Widget] )

<!DOCTYPE html>
<html>
<head lang="en">
    <meta charset="UTF-8">
    <title></title>
</head>
<body>
<h2>Widgets</h2>
<table>
    <tr>
        <th>ID</th>
        <th>Name</th>
        <th>Description</th>
    </tr>
    @for( widget <- widgets ) {
        <tr>
            <td><a href="@routes.WidgetController.details(widget.id)">@widget.id</a></td>
            <td>@widget.name</td>
            <td>@widget.description</td>
        </tr>
    }
</table>
</body>
</html>

The first line in Listing 3 tells Play that the template requires a List of Widgets and names it "widgets" on the page. The page builds a table that shows the Widget's id, name, and description fields. The Scala notation for iterating over a collection is the @for command. The iteration logic is backward from Java's notation but it reads: For all widget instances in the widgets collection, do this.
@for( widget <- widgets )

Now we have a widget variable in the body of the for loop that we can access by prefixing it with an at symbol (@):

- @widget.id: Returns the widget's ID
- @widget.name: Returns the widget's name
- @widget.description: Returns the widget's description

We also added a link to the id that invokes the details action, which we review next. Note that rather than using the URI of the details action, we reference it through its routes value:

@routes.WidgetController.details(widget.id)

The details action accepts the ID of the Widget to display, loads the Widget with the Widget.findById() method, fills in a Form, and renders that form using the details template:

private static Form<Widget> widgetForm = Form.form( Widget.class );

public static Result details( String id ) {
    Widget widget = Widget.findById( id );
    if( widget == null ) {
        return notFound( "No widget found with id: " + id );
    }

    // Create a filled form with the contents of the widget
    Form<Widget> filledForm = widgetForm.fill( widget );

    // Return an HTTP 200 OK with the form rendered by the details template
    return ok( details.render( filledForm ) );
}

The WidgetController class defines a Play form (Form<Widget>) object, which will be used both to render an existing Widget object and to serve as a mechanism for the user to POST a Widget to the controller to create a new Widget object. (The full WidgetController is shown below.) The details() method queries for a Widget and, if it is found, fills the form by invoking the widget form's fill() method, renders the form using the details template, and returns an HTTP OK response. If the Widget is not found, then it returns an HTTP 404 Not Found response by invoking the notFound() method. The routes file maps the following URI to the details action:

GET /widget/:id controllers.WidgetController.details(id : String)

We use the HTTP GET verb for the URI /widget/:id and map that to the details() method, which accepts the ID as a String. Listing 4 shows the source code for the details template.

Listing 4. details.scala.html

@(widgetForm: Form[Widget])

@import helper._

<!DOCTYPE html>
<html>
<head lang="en">
<meta charset="UTF-8">
<title></title>
</head>
<body>
<h2>Widget</h2>
@helper.form( action = routes.WidgetController.update() ) {
    @helper.inputText( widgetForm( "id" ), '_label -> "ID" )
    @helper.inputText( widgetForm( "name" ), '_label -> "Name" )
    @helper.inputText( widgetForm( "description" ), '_label -> "Description" )
    <input type="submit" value="Update">
}
</body>
</html>

The details template accepts a Form[Widget] and assigns it to the widgetForm variable. It uses a helper class to manage the form, so it imports helper._ (the Scala notation for import helper.*). The helper class creates a form using its form() method and passes the action URI to which the form body is POSTed, which in this case is the routes.WidgetController.update() action. The body of the form is wrapped in braces, and inside the form you can see additional helper methods for creating and populating form elements. The inputText() method creates a form text field; it accepts the form field as its first parameter and the label as its second parameter. If the form has values in it, as it does in this case, then the value of the field is set in the form. Finally, the submit button is used to submit the form to the update() action method.
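Neither update() nor the save() action we'll use later for the Add form is listed in this article, so here is a minimal sketch of how they might be implemented with Play's standard form binding. Treat it as one plausible approach, not the article's official code: the badRequest() error handling is an assumption, and because the form posts the widget's id in its body, update() is sketched with no URL parameter.

POST /widget-update controllers.WidgetController.update()
POST /widgets controllers.WidgetController.save()

public static Result update() {
    // Bind the POSTed form fields (id, name, description) to a Widget
    Form<Widget> boundForm = widgetForm.bindFromRequest();
    if( boundForm.hasErrors() ) {
        return badRequest( details.render( boundForm ) );
    }
    Widget widget = boundForm.get();
    widget.update();
    return redirect( routes.WidgetController.list() );
}

public static Result save() {
    Form<Widget> boundForm = widgetForm.bindFromRequest();
    if( boundForm.hasErrors() ) {
        return badRequest( list.render( Widget.findAll(), boundForm ) );
    }
    Widget widget = boundForm.get();
    widget.save();
    return redirect( routes.WidgetController.list() );
}

With these in place, the form in Listing 4 round-trips correctly: the filled form is rendered, edited, and posted back to update().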
The resultant HTML for viewing "Widget 1" is the following:

<!DOCTYPE html>
<html>
<head lang="en">
<meta charset="UTF-8">
<title></title>
</head>
<body>
<h2>Widget</h2>
<form action="/widget-update" method="POST" >
<dl class=" " id="id_field">
<dt><label for="id">ID</label></dt>
<dd>
<input type="text" id="id" name="id" value="1" >
</dd>
<dd class="info">Required</dd>
</dl>
<dl class=" " id="name_field">
<dt><label for="name">Name</label></dt>
<dd>
<input type="text" id="name" name="name" value="Widget 1" >
</dd>
<dd class="info">Required</dd>
</dl>
<dl class=" " id="description_field">
<dt><label for="description">Description</label></dt>
<dd>
<input type="text" id="description" name="description" value="My Widget 1" >
</dd>
</dl>
<input type="submit" value="Update">
</form>
</body>
</html>

The next thing that we might want to do is add a new Widget to our list. Let's augment our homepage (the list template) to add a form to the bottom of the page that allows our users to add new Widgets. As we did in the previous section, we need to create a Form object and pass it to the template, but this time the Form object should not be populated:

private static Form<Widget> widgetForm = Form.form( Widget.class );

public static Result list() {
    return ok( list.render( Widget.findAll(), widgetForm ) );
}

We already created the widgetForm in the previous example, but it is shown here again for completeness. When we render the list template, we're going to pass the list of all Widgets as well as the unpopulated widgetForm. Listing 5 adds the new form to the list template.

Listing 5. list.scala.html

@( widgets: List[Widget], widgetForm: Form[Widget] )

@import helper._

<!DOCTYPE html>
<html>
<head lang="en">
<meta charset="UTF-8">
<title></title>
</head>
<body>
<h2>Widgets</h2>
<table>
<thead><th>ID</th><th>Name</th><th>Description</th></thead>
<tbody>
@for( widget <- widgets ) {
<tr>
<td><a href="@routes.WidgetController.details(widget.id)">@widget.id</a></td>
<td>@widget.name</td>
<td>@widget.description</td>
</tr>
}
</tbody>
</table>

<h2>Add New Widget</h2>
@helper.form( action = routes.WidgetController.save() ) {
    @helper.inputText( widgetForm( "id" ), '_label -> "ID" )
    @helper.inputText( widgetForm( "name" ), '_label -> "Name" )
    @helper.inputText( widgetForm( "description" ), '_label -> "Description" )
    <input type="submit" value="Add">
}
</body>
</html>

Listing 5 updates the expected parameters to include both a List of Widgets and a Widget Form, and then it imports the form helper classes. The bottom of Listing 5 creates the form using the helper.form() method, with the action directed to the routes.WidgetController.save() action. This form looks a whole lot like the form in Listing 4. To complete this example, we need to add one more feature: a delete link. Deleting web resources is accomplished by using the DELETE HTTP method, which we cannot simply invoke by adding a link or a form. Instead we need to make the call using JavaScript. Let's add a new delete method to our routes file:

DELETE /widget/:id controllers.WidgetController.delete(id : String)

When the DELETE HTTP verb is passed to the /widget/id URI, the WidgetController's delete() action will be invoked:

public static Result delete( String id ) {
    Widget widget = Widget.findById( id );
    if( widget == null ) {
        return notFound( "No widget found with id: " + id );
    }
    widget.delete();
    return redirect( routes.WidgetController.list() );
}

The delete() method finds the Widget with the specified ID and, if it is found, deletes it by invoking the Widget's delete() method; if the Widget is not found, then it returns a notFound() response (404) with an error message. Finally, the method redirects the caller to the WidgetController's list() action. Listing 6 shows the final version of our list.scala.html template, which includes the new delete button.

Listing 6.
list.scala.html (final)

@( widgets: List[Widget], widgetForm: Form[Widget] )

@import helper._

@main( "Widgets" ) {

<h2>Widgets</h2>

<script>
function del(url) {
    $.ajax({
        url: url,
        type: 'DELETE',
        success: function(results) {
            // Refresh the page
            location.reload();
        }
    });
}
</script>

<table>
<thead><th>ID</th><th>Name</th><th>Description</th><th>Delete</th></thead>
<tbody>
@for( widget <- widgets ) {
<tr>
<td><a href="@routes.WidgetController.details(widget.id)">@widget.id</a></td>
<td>@widget.name</td>
<td>@widget.description</td>
<td><a href="#" onclick="javascript:del('@routes.WidgetController.delete(widget.id)')">Delete</a></td>
</tr>
}
</tbody>
</table>

<h2>Add New Widget</h2>
@helper.form( action = routes.WidgetController.save() ) {
    @helper.inputText( widgetForm( "id" ), '_label -> "ID" )
    @helper.inputText( widgetForm( "name" ), '_label -> "Name" )
    @helper.inputText( widgetForm( "description" ), '_label -> "Description" )
    <input type="submit" value="Add">
}
}

Listing 6 may look a little strange compared to our previous listings, primarily because it is lacking the HTML headers and footers and the body is now wrapped in a main() method. When you created your project, Play created a main.scala.html file for you that accepts a title String and content as Html. Listing 7 shows the contents of the main.scala.html template.

Listing 7. main.scala.html

@(title: String)(content: Html)

<!DOCTYPE html>
<html>
<head>
<title>@title</title>
<!-- jQuery and other shared assets are imported here -->
</head>
<body>
@content
</body>
</html>

The title is pasted in as the <head> <title> element and the content is pasted inside the <body> of the document. Furthermore, the header imports jQuery for us, which you'll find in the public/javascripts folder; we need jQuery to simplify our Ajax delete call. If you want to maintain a consistent look-and-feel across your pages, you should add styling information, menus, and other common resources to the main template and then wrap your other templates in a call to main(). Listing 6 adds a delete link to each Widget with the following line:

<td><a href="#" onclick="javascript:del('@routes.WidgetController.delete(widget.id)')">Delete</a></td>

We invoke the del() JavaScript method, which was shown in Listing 6, passing it the route to the WidgetController's delete() action and the id of the Widget to delete. The del() method uses jQuery's ajax() method to invoke the specified URL with the specified type (the HTTP DELETE verb) and a success method to invoke upon completion (reload the page). When you've completed your code, launch your application by running the play command from your application's home directory and executing run from the play shell. Open a browser to your application's URL (Play serves on port 9000 by default) and take it for a spin. You can download the source code for this article here.

Summary

The Play Framework is not a traditional Java web framework, and it actually requires us to think about developing web applications differently. It runs in its own JVM, not inside a Servlet container, and it supports instant redeployment of applications without a build cycle. When building Play applications you are required to think in terms of HTTP and not in terms of Java. The "Introduction to Play 2 for Java" article presented an overview of Play, showed how to set up a Play environment, and then built a Hello, World application. Here we built a more complicated Play application that manages CRUD (create, read, update, and delete) operations for a Widget, uses Play's domain-driven paradigm, and makes better use of Play's Scala templates.
The final article in this series, Integrating Play with Akka, shows how to realize the true power of asynchronous messaging and how to suspend a request while waiting for a response, so that your application can support more simultaneous requests than a traditional Java web application.
http://www.informit.com/articles/article.aspx?p=2223715
CC-MAIN-2015-22
en
refinedweb
Ancillary Objects: Separate Debug ELF Files For Solaris

By Ali Bahrami on Nov 26, 2012

ELF objects contain allocable sections, which are mapped into memory at runtime, and non-allocable sections, which are present in the file for use by debuggers and observability tools, but which are not mapped or used at runtime. Typically, all of these sections exist within a single object file. Ancillary objects allow them to instead go into a separate file.

There are different reasons given for wanting such a feature. One can debate whether the added complexity is worth the benefit, and in most cases it is not. However, one important case stands out: customers with very large 32-bit objects who are not ready or able to make the transition to 64 bits. We have customers who build extremely large 32-bit objects. Historically, the debug sections in these objects have used the stabs format, which is limited, but relatively compact. In recent years, the industry has transitioned to the powerful but verbose DWARF standard. In some cases, the size of these debug sections is large enough to push the total object file size past the fundamental 4GB limit for 32-bit ELF object files. The best, and ultimately only, solution to overly large objects is to transition to 64 bits. However, consider environments where:

- Hundreds of users may be executing the code on large shared systems. (32-bit code uses less memory and bus bandwidth, and on sparc otherwise runs just as fast as 64-bit code.)
- Complex finely tuned code, where the original authors may no longer be available.
- Critical production code, that was expensive to qualify and bring online, and which is otherwise serving its intended purpose without issue.

Design

The design of ancillary objects is intended to be simple, both to help human understanding when examining elfdump output, and to lower the bar for debuggers such as dbx to support them.

- The primary and ancillary objects have the same set of section headers, with the same names, in the same order (i.e. each section has the same index in both files).
- A single added section of type SHT_SUNW_ANCILLARY is added to both objects, containing information that allows a debugger to identify and validate both files relative to each other. Given one of these files, the ancillary section allows you to identify the other.
- Allocable sections go in the primary object, and non-allocable ones go into the ancillary object. A small set of non-allocable sections, notably the symbol table, are copied into both objects.
- As noted above, most sections are only written to one of the two objects, but both objects have the same section header array. The section header in the file that does not contain the section data is tagged with the SHF_SUNW_ABSENT section header flag to indicate its placeholder status.
- Compiler writers and others who produce objects can set the SHF_SUNW_PRIMARY section header flag to mark non-allocable sections that should go to the primary object rather than the ancillary.
- If you don't request an ancillary object, the Solaris ELF format is unchanged. Users who don't use ancillary objects do not pay for the feature. This is important, because they exist to serve a small subset of our users, and must not complicate the common case.
- If you do request an ancillary object, the runtime behavior of the primary object will be the same as that of a normal object. There is no added runtime cost. The primary and ancillary object together represent a logical single object.
This is facilitated by the use of a single set of section headers. One can easily imagine a tool that can merge a primary and ancillary object into a single file, or the reverse. (Note that although this is an interesting intellectual exercise, we don't actually supply such a tool because there's little practical benefit above and beyond using ld to create the files.)

Among the benefits of this approach are:

- There is no need for per-file symbol tables to reflect the contents of each file. The same symbol table that would be produced for a standard object can be used.
- The section contents are identical in either case; there is no need to alter data to accommodate multiple files.
- It is very easy for a debugger to adapt to these new files, and the processing involved can be encapsulated in input/output routines. Most of the existing debugger implementation applies without modification.
- The limit of a 4GB 32-bit output object is now raised to 4GB of code, and 4GB of debug data. There is also the future possibility (not currently supported) to support multiple ancillary objects, each of which could contain up to 4GB of additional debug data. It must be noted however that the 32-bit DWARF debug format is itself inherently 32-bit limited, as it uses 32-bit offsets between debug sections, so the ability to employ multiple ancillary object files may not turn out to be useful.

Using Ancillary Objects (From the Solaris Linker and Libraries Guide)

By default, objects contain both allocable and non-allocable sections. The non-allocable sections in an object are not loaded into memory at runtime by the operating system, and so they have no impact on memory use or other aspects of runtime performance no matter their size. For convenience, both allocable and non-allocable sections are normally maintained in the same file. However, there are situations in which it can be useful to separate these sections:

- To reduce the size of objects in order to improve the speed at which they can be copied across wide area networks.
- To support fine grained debugging of highly optimized code, which requires considerable debug data. In modern systems, the debugging data can easily be larger than the code it describes.
- The size of a 32-bit object is limited to 4 Gbytes. In very large 32-bit objects, the debug data can cause this limit to be exceeded and prevent the creation of the object.
- To limit the exposure of internal implementation details.

Traditionally, objects have been stripped of non-allocable sections in order to address these issues. Stripping is effective, but destroys data that might be needed later. The Solaris link-editor can instead write non-allocable sections to an ancillary object. This feature is enabled with the -z ancillary command line option.

$ ld ... -z ancillary[=outfile] ...

By default, the ancillary file is given the same name as the primary output object, with a .anc file extension. However, a different name can be chosen by supplying an outfile value to the -z ancillary option. When -z ancillary is specified, the link-editor performs the following actions:

- All allocable sections are written to the primary object. In addition, all non-allocable sections containing one or more input sections that have the SHF_SUNW_PRIMARY section header flag set are written to the primary object.
- All remaining non-allocable sections are written to the ancillary object.
- The following non-allocable sections are written to both the primary object and the ancillary object.

- .shstrtab The section name string table.
- .symtab The full non-dynamic symbol table.
- .symtab_shndx The symbol table extended index section associated with .symtab.
- .strtab The non-dynamic string table associated with .symtab.
- .SUNW_ancillary Contains the information required to identify the primary and ancillary objects, and to identify the object being examined.

The primary object and all ancillary objects contain the same array of section headers. Each section has the same section index in every file. Although the primary and ancillary objects all define the same section headers, the data for most sections will be written to a single file as described above. If the data for a section is not present in a given file, the SHF_SUNW_ABSENT section header flag is set, and the sh_size field is 0. This organization makes it possible to acquire a full list of section headers, a complete symbol table, and a complete list of the primary and ancillary objects from either the primary or an ancillary object.

The following example illustrates the underlying implementation of ancillary objects. An ancillary object is created by adding the -z ancillary command line option to an otherwise normal compilation. The file utility shows that the result is an executable named a.out, and an associated ancillary object named a.out.anc.

$ cat hello.c
#include <stdio.h>

int
main(int argc, char **argv)
{
        (void) printf("hello, world\n");
        return (0);
}
$ cc -g -zancillary hello.c
$ file a.out a.out.anc
a.out: ELF 32-bit LSB executable 80386 Version 1 [FPU], dynamically linked, not stripped, ancillary object a.out.anc
a.out.anc: ELF 32-bit LSB ancillary 80386 Version 1, primary object a.out
$ ./a.out
hello, world

The resulting primary object is an ordinary executable that can be executed in the usual manner. It is no different at runtime than an executable built without the use of ancillary objects and then stripped of non-allocable content using the strip or mcs commands. As previously described, the primary object and ancillary objects contain the same section headers. To see how this works, it is helpful to use the elfdump utility to display these section headers and compare them. The section header information for the previous link-edit example shows that the data for most sections is only present in one of the two files, and absent from the other file. The SHF_SUNW_ABSENT section header flag is set when the data is absent. The data for allocable sections needed at runtime is found in the primary object. The data for non-allocable sections used for debugging but not needed at runtime is placed in the ancillary file. A small set of non-allocable sections is fully present in both files. These are the .SUNW_ancillary section used to relate the primary and ancillary objects together, the section name string table .shstrtab, as well as the symbol table .symtab and its associated string table .strtab.

It is possible to strip the symbol table from the primary object. A debugger that encounters an object without a symbol table can use the .SUNW_ancillary section to locate the ancillary object, and access the symbol table contained within. The primary object, and all associated ancillary objects, contain a .SUNW_ancillary section that allows all the objects to be identified and related together.
$ elfdump -T SUNW_ancillary a.out a.out.anc
a.out:
Ancillary Section:  .SUNW_ancillary
     index  tag                 value
       [0]  ANC_SUNW_CHECKSUM   0x8724

a.out.anc:
Ancillary Section:  .SUNW_ancillary
     index  tag                 value
       [0]  ANC_SUNW_CHECKSUM   0xfbe2

The ancillary sections for both objects contain the same number of elements, and are identical except for the first element. Each object, starting with the primary object, is introduced with a MEMBER element that gives the file name, followed by a CHECKSUM that identifies the object. In this example, the primary object is a.out, and has a checksum of 0x8724. The ancillary object is a.out.anc, and has a checksum of 0xfbe2. The first element in a .SUNW_ancillary section, preceding the MEMBER element for the primary object, is always a CHECKSUM element, containing the checksum for the file being examined. The presence of a .SUNW_ancillary section in an object indicates that the object has associated ancillary objects. The names of the primary and all associated ancillary objects can be obtained from the ancillary section of any one of the files. It is possible to determine which file is being examined from the larger set of files by comparing the first checksum value to the checksum of each member that follows.

Debugger Access and Use of Ancillary Objects

Debuggers and other observability tools must merge the information found in the primary and ancillary object files in order to build a complete view of the object. This is equivalent to processing the information from a single file. The merging is simplified by the fact that the primary object and ancillary objects contain the same section headers and a single symbol table. The following steps can be used by a debugger to assemble the information contained in these files:

- Starting with the primary object, or any of the ancillary objects, locate the .SUNW_ancillary section. The presence of this section identifies the object as part of an ancillary group, and it contains information that can be used to obtain a complete list of the files and determine which of those files is the one currently being examined.
- Create a section header array in memory, using the section header array from the object being examined as an initial template.
- Open and read each file identified by the .SUNW_ancillary section in turn. For each file, fill in the in-memory section header array with the information for each section that does not have the SHF_SUNW_ABSENT flag set.

The result will be a complete in-memory copy of the section headers with pointers to the data for all sections. Once this information has been acquired, the debugger can proceed as it would in the single file case, to access and control the running program.

Note - The ELF definition of ancillary objects provides for a single primary object, and an arbitrary number of ancillary objects. At this time, the Oracle Solaris link-editor only produces a single ancillary object containing all non-allocable sections. This may change in the future. Debuggers and other observability tools should be written to handle the general case of multiple ancillary objects.

ELF Implementation Details (From the Solaris Linker and Libraries Guide)

To implement ancillary objects, it was necessary to extend the ELF format to add a new object type (ET_SUNW_ANCILLARY), a new section type (SHT_SUNW_ANCILLARY), and 2 new section header flags (SHF_SUNW_ABSENT, SHF_SUNW_PRIMARY). In this section, I will detail these changes, in the form of diffs to the Solaris Linker and Libraries manual.
Part IV ELF Application Binary Interface

Chapter 13: Object File Format

Object File Format

Edit Note: This existing section at the beginning of the chapter describes the ELF header. There's a table of object file types, which now includes the new ET_SUNW_ANCILLARY type.

e_type - Identifies the object file type, as listed in the following table.

Sections

Edit Note: This overview section defines the section header structure, and provides a high level description of known sections. It was updated to define the new SHF_SUNW_ABSENT and SHF_SUNW_PRIMARY flags and the new SHT_SUNW_ANCILLARY section.

...

sh_type - Categorizes the section's contents and semantics. Section types and their descriptions are listed in Table 13-5.

sh_flags - Sections support 1-bit flags that describe miscellaneous attributes. Flag definitions are listed in Table 13-8.

...

SHT_LOSUNW - SHT_HISUNW - Values in this inclusive range are reserved for Oracle Solaris OS semantics.

SHT_SUNW_ANCILLARY - Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section.

...

SHF_SUNW_ABSENT - Indicates that the data for this section is not present in this file. When ancillary objects are created, the primary object and any ancillary objects will all have the same section header array, to facilitate merging them to form a complete view of the object, and to allow them to use the same symbol tables. Each file contains a subset of the section data. The data for allocable sections is written to the primary object, while the data for non-allocable sections is written to an ancillary file. The SHF_SUNW_ABSENT flag is used to indicate the sections whose data is absent from the file being examined.

SHF_SUNW_PRIMARY - Any output section containing one or more input sections with this flag set is written to the primary object, without regard for its allocable status.

...

Two members in the section header, sh_link and sh_info, hold special information, depending on section type.

Special Sections

Edit Note: This section describes the sections used in Solaris ELF objects, using the types defined in the previous description of section types. It was updated to define the new .SUNW_ancillary (SHT_SUNW_ANCILLARY) section.

Various sections hold program and control information. Sections in the following table are used by the system and have the indicated types and attributes.

...

.SUNW_ancillary - Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section for details.

...

Ancillary Section

Edit Note: This new section provides the format reference describing the layout of a .SUNW_ancillary section and the meaning of the various tags. Note that these sections use the same tag/value concept used for dynamic and capabilities sections, and will be familiar to anyone used to working with ELF.

In addition to the primary output object, the Solaris link-editor can produce one or more ancillary objects. Ancillary objects contain non-allocable sections that would normally be written to the primary object. When ancillary objects are produced, the primary object and all of the associated ancillary objects contain a SHT_SUNW_ancillary section, containing information that identifies these related objects. Given any one object from such a group, the ancillary section provides the information needed to identify and interpret the others. This section contains an array of the following structures. See sys/elf.h.
For each object with this type, a_tag controls the interpretation of a_un.

typedef struct {
        Elf32_Word      a_tag;
        union {
                Elf32_Word      a_val;
                Elf32_Addr      a_ptr;
        } a_un;
} Elf32_Ancillary;

typedef struct {
        Elf64_Xword     a_tag;
        union {
                Elf64_Xword     a_val;
                Elf64_Addr      a_ptr;
        } a_un;
} Elf64_Ancillary;

The following ancillary tags exist.

- a_val - These objects represent integer values with various interpretations.
- a_ptr - These objects represent file offsets or addresses.
- ANC_SUNW_NULL - Marks the end of the ancillary section.
- ANC_SUNW_CHECKSUM - Provides the checksum for a file in the a_val element. When ANC_SUNW_CHECKSUM precedes the first instance of ANC_SUNW_MEMBER, it provides the checksum for the object from which the ancillary section is being read. When it follows an ANC_SUNW_MEMBER tag, it provides the checksum for that member.
- ANC_SUNW_MEMBER - Specifies an object name. The a_ptr element contains the string table offset of a null-terminated string that provides the file name.

A short C sketch at the end of this post shows how a tool might walk this array.

Related Other Work

The GNU developers have also encountered the need/desire to support separate debug information files, and use the separate debug file mechanism documented with GDB. At the current time, the separate debug file is constructed by building the standard object first, and then copying the debug data out of it in a separate post processing step. Hence, it is limited to a total of 4GB of code and debug data, just as a single object file would be. They are aware of this, and I have seen online comments indicating that they may add direct support for generating these separate files to their link-editor.

It is worth noting that the GNU objcopy utility is available on Solaris, and that the Studio dbx debugger is able to use these GNU style separate debug files even on Solaris. Although this is interesting in terms of giving Linux users a familiar environment on Solaris, the 4GB limit means it is not an answer to the problem of very large 32-bit objects. We have also encountered issues with objcopy not understanding Solaris-specific ELF sections, when using this approach.

The GNU community also has a current effort to adapt their DWARF debug sections in order to move them to separate files before passing the relocatable objects to the linker; the details of Project Fission can be found on the GCC wiki. The goal of this project appears to be to reduce the amount of data seen by the link-editor. The primary effort revolves around moving DWARF data to separate .dwo files so that the link-editor never encounters them. The details of modifying the DWARF data to be usable in this form are involved; please see the project wiki for details.
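To make the tag/value layout concrete, here is a minimal C sketch of the traversal described above: it records the leading checksum (the file being examined), then prints each member of the group, flagging the one whose checksum matches. It assumes the Elf32_Ancillary type and ANC_SUNW_* tags from <sys/elf.h>, and that the caller has already mapped the section data (anc) and the string table it references (strtab); error handling is omitted.

#include <stdio.h>
#include <sys/elf.h>

static void
print_anc_members(const Elf32_Ancillary *anc, const char *strtab)
{
        Elf32_Word      self_checksum = 0;
        const char      *member = NULL;

        for (; anc->a_tag != ANC_SUNW_NULL; anc++) {
                switch (anc->a_tag) {
                case ANC_SUNW_MEMBER:
                        /* a_ptr is a string table offset giving the file name */
                        member = strtab + anc->a_un.a_ptr;
                        break;
                case ANC_SUNW_CHECKSUM:
                        if (member == NULL) {
                                /* leading checksum: the file being examined */
                                self_checksum = anc->a_un.a_val;
                        } else {
                                /* checksum for the preceding MEMBER */
                                (void) printf("%s%s\n", member,
                                    (anc->a_un.a_val == self_checksum) ?
                                    "\t(this file)" : "");
                        }
                        break;
                }
        }
}

Run against the hello.c example above, this would list a.out and a.out.anc, marking whichever of the two files the section was read from.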
https://blogs.oracle.com/ali/entry/ancillary_objects_separate_debug_elf
CC-MAIN-2015-22
en
refinedweb
You don't have to create either DirectRouter.ashx or DirectApi.ashx. They are not physical files. These values are used by Ext.Direct.Mvc to define two routes - one to generate the API, and one to route Direct requests. DirectHandler.ashx (or whatever value you used in web.config) simply defines the URL that Ext.Direct makes AJAX requests against. Just follow the steps in the first post precisely, and you'll be fine.

The Direct tab still fails

I am starting to get a handle on this, but I still haven't set up my project correctly. The project elishnevsky has set up works, but now I am trying to copy concepts from this page to another website, and I am missing something. I need something simpler to start, so I have just deleted out most of the content except for a panel and a "Hello World" button. I have deliberately used different namespaces because this forces me to understand where everything is coming from. I suspect that I have some problems with referring to the correct objects on this line (in Site.Master):

<%= Html.ActionLink("Direct", "DirectPage", "Direct")%></li>

It fails in the following code:

protected override void ExecuteCore()
{
    if (ControllerContext.RouteData.RouteHandler is DirectMvcRouteHandler)
    {
        this.ActionInvoker = new DirectMethodInvoker();
    }
    else if (IsDirectMethodCall())
    {
        throw new InvalidOperationException("This controller action can only be executed by Ext.Direct.");
    }
    base.ExecuteCore();
}

This is the error I most often get:

A public action method 'DirectPage' could not be found on controller 'Ext.Direct.Mvc.DirectController'.

I have once gotten this error (with a prior edit of the code), but I haven't figured out how to duplicate it:

This controller action can only be executed by Ext.Direct

I have figured some of this out on my own

This new file now correctly implements "Hello World" on the Direct tab. I was getting confused about namespace names, view names, and so on. Perhaps this example will help others since it is very simple. I am now working on implementing the CustomerOrders tab using Ext.Direct.

As for your previous post, keep in mind that every action in a controller that inherits from DirectController is considered a Direct method, unless it is marked with the [DirectIgnore] attribute. Such controller actions should only be used by Ext.Direct and cannot be executed in any other way, except in a Direct request context.

As for your last post, I don't understand your confusion. Can you please describe your problem?

I am new to the MVC pattern as well as to ExtJS, so almost everything about this is unfamiliar. I was getting confused between Ext.Direct actions, view names, controllers, and so on, especially since I was renaming everything on purpose so such confusions WOULD happen and I'd have to work them out. I also had to rename the test project namespaces so I'd have more clarity while editing, to know which project was the test bed and which was your reference code. I had to restore your original project from the downloaded zip file because I accidentally overwrote it with some of my testbed code.

I have two more weeks of down time to get my head wrapped around this, and it appears to be a great way to implement my upcoming project, which will be converting some very complex asp.net pages that were written very badly in highly repetitive ASP classic and were ported in the most work-intensive but intellectually lazy way possible to ASP.NET 2003 a long time ago. I want to avoid intellectual laziness this time and do modern Web 2.0 coding.
Part of that is becoming part of the open source community, and ExtJS appears to be very robust and elegantly designed, so this looks like a great place to be. I am reasonably good at JavaScript, but the ExtJS style of defining things with functions inside functions and complex nested internal arrays is not exactly the way I was used to building or using my objects. It is a very compact way of coding, but it will take a few more days for me to start doing this with some confidence.

Well, all I can say is "good luck". There are many good starting points to learn Ext JS. Check out the links in my signature. As for ASP.NET MVC, there's the official site as well as some blogs, for example this one or this one.

elishnevsky

Hello elishnevsky,

Great contribution, works well out of the box. Was wondering, is there any support for unit testing? Or at a minimum the ability to call the methods directly for testing if they are not attributed with ignore?

Cheers,
Timothy

Code:

Test.AddNumbers(10, 33, function(result, response) {
    console.info(result);
});

Thanks elishnevsky -- I was hoping to avoid using the console but hey ... it's better than nothing. Again, cheers for the wicked job on this contribution!

Cheers,
Timothy

Hello elishnevsky,

Was just wondering how you would deal with a form that has more than 25 fields to submit to an action. I don't want to have to write a server side action that has 25 parameters for each one. Any recommendations?

Thanks again for your help, much appreciated.

Cheers
https://www.sencha.com/forum/showthread.php?72245-Ext.Direct-for-ASP.NET-MVC/page5
CC-MAIN-2015-22
en
refinedweb
Review of semantic annotation proposals

XLink referencing named individuals

In the next three references, XLink @href is generally used within a property GML element (starting with a lower case letter) to create a link to an individual (this is a correct use of XLink). In some cases, the link points to an element which is, or could be, managed in an ontology if an appropriate URN redirection mechanism was set up.

Sheth (2009):
- Sheth, Semantic Sensor Markup of Data and Services, SSN-XG briefing
- the xlink:href attributes are attached to "property elements" like om:procedure or swe:component
- Example: <om:Procedure xlink:
- the xlink:href attribute refers to an individual: <weather:PrecipitationSensor rdf:
- the second example is similar (the convention used is that a name starting with "_" is an individual)

Bermudez (2009):
- Luis Bermudez, Enriching SOS services with Ontologies - OOSTethys/OceansIE and MMI, SSN-XG briefing
- <sos:procedure xlink:
- Q.: is there a reference which allows/documents the use of # in URNs?
- <sos:observedProperty xlink:
- and sea_water_temperature is defined as an individual in cf.owl by <Standard_Name rdf:

New types of semantic annotations (Sapience)

Janowicz et al. also use xlink:href to point to an individual (we assume Anemometer01 and WindDirection are individuals). It is interesting to note the presence of an xlink:role attribute (which should be an xlink:arcrole) pointing to sawsdl:modelreference. This suggests that a SAWSDL-based approach could be used in such a case, although there is no reference to a required lifting script in the paper. The last example should not be interpreted as a semantic annotation.

Janowicz (2009):
- Janowicz et al. (2009; forthcoming): Semantic Enablement for Spatial Data Infrastructures. Transactions in GIS.
- <om:procedure xlink:
- <om:observedProperty xlink:
- <om:featureOfInterest xlink:

Sapience is a free and open source API for semantic annotations. It consists of several Java libraries to control the lookup and injection of reference links into predefined metadata. Service developers can forward their serialized XML document to Sapience, which then adds the appropriate links to the provided documents and returns the new versions to the source application. The annotation instructions are stored and looked up in a database; Sapience does not support the authoring of annotations so far. Once a (feature type) schema, as well as the external references to a global ontology, has been generated and uploaded by the service developer, Sapience can add semantic annotations to the service and hence provide Semantic Enablement [1]. So far, Sapience is mostly focused on the annotation of geospatial data compliant with the Open Geospatial Consortium (OGC) standards, and especially on KML.

+ Note: preliminary review of Sapience and its use of annotation attributes like modelReference and domainReference [2]

Sapience supports the dynamic and standard-compliant injection of references to external shared vocabularies into (meta)data (Maue) and SAWSDL-like model references into XML schemas. Three types of annotation properties can be used (see also Listings 1, 10, 11 and 13 in OGC 08-167r1 and the Annotation ontology):

- modelReference reuses (or extends?) the SAWSDL standard
- domainReference and measureReference are new types of annotations:
  - domainReference handles the relation to the domain model, a "relation for connecting elements of a resource ontology with domain ontologies" and "supports information retrieval tasks".
It may be related to the corresponding usage in WSML (see Listing 1 in OGC 08-167r1).
  - measureReference handles the relation to the reference space, a "relation for connecting attributes of a resource ontology, like FTOs, with semantic datums for attributes"

Maue, Schade, et al. (2009):
- Maue et al.: OGC Discussion Paper "Semantic Annotations in OGC Standards", OGC 08-167r1
- SAPIENCE: Semantic Annotations API
- Patrick Maue: presentation about Sapience (as well as SWRL spatial and the concept repository) held at the OGC meeting in December 2009 at Google
- Sebastien Schade: Computer-Tractable Translation of Geospatial Data. Article under review for the International Journal of Spatial Data Infrastructures Research, submitted 2009-09-01
- Sebastien Schade: Ontology-Driven Translation of Geospatial Data (material for PhD thesis)
- Annotation (this ontology captures the old and new relations for implementing annotations)
- Patrick Maué: User Feedback in the Geospatial Semantic Web. Proc. of ECIR 2009 Workshop "Geographic Information on the Internet", Toulouse, April 2009 (pp 71-76)

The documentation includes the description of two distinct use cases for Sapience. The tutorial and Java libraries are directed at users of KML. The WSDL module is at an earlier phase of development (initial upload of source code to the Subversion repository).

In Sapience, injection is the process of adding references to external supplementary documentation to local metadata. The page describing the Injection Procedure does not say if the injection step only corresponds to the addition of annotations in the same format (e.g. XML) or if it also covers the lifting step (as in SAWSDL's definition of "lifting"). This example, Generating OWL instances from the feature/annotation model and applying SWRL spatial built-ins, suggests that the lifting operation would allow the added semantics (e.g. the spatial relations) to be exploited with the help of SWRL rules (or equivalent).

Note: analysis to be completed

- sawsdl:modelReference originates with the SAPIENCE project.
+ Note: sawsdl:modelReference comes from Section 2.1 of the SAWSDL specification [17]
- Maue 2009 [3] (pp 71-76) defines the difference between a modelReference and a domainReference annotation (see below).

Maue (2009): "Semantic annotations establish twofold links. First, the Model Reference links from the schema-based data model to the local Resource Ontology which can be interpreted by query processing algorithms. This ensures that semantic annotations can be applied to all kinds of geospatial resources, as long as we are able to map from the data model to the resource ontology. The Domain Reference relates the local concepts to global domain ontologies capturing the common vocabulary within one community. It is usually based on rules which properly reflect the inner relationships of the data model." (see also slide 61 in [4])

+Open issue (20100114 Laurent_Lefort): there is an assumption in SAPIENCE that a two-step mapping annotation mechanism (schema to ontology, and then local (resource ontology) to global reference (domain ontology)) is used. The question here is whether the proposed mechanism should also cover cases where the annotation is done in a single step (which I think is what SAWSDL was developed for).
Could we have a specification which says that we can chain these properties with an axiom like:

sapience:modelReference o sapience:domainReference sub-property-of sawsdl:modelReference

(A Turtle sketch of such an axiom is given at the end of this page.)

- question: Is this related to the definition of mapping rules in the WSML community? (if yes, reference to add - but I have a suspicion the answer is no)

+Note: evaluate the terms defined in Appendix A Glossary of OGC Discussion Paper "Semantic Annotations in OGC Standards", OGC 08-167r1:

- Annotation: Attaching descriptive metadata to a resource, with the intention to simplify its discovery and evaluation.
- Model reference: A link between an element in the schema-based metadata and a concept in the knowledge model. The model reference is used to link to an entity with equivalent semantics but a different encoding.

Use of URNs

There is a range of OGC documentation on how to manage URNs, but it is difficult to map these best practice guidelines to the corresponding Semantic Web best practice guidelines. One issue is that there are not yet "reference" implementations of URN resolvers which bridge the gap between the two worlds.

- Definition identifier URNs in OGC namespace, 07-092r1
- URN Policy Documents (OGC)
- Register of Def types for URNs (OGC)
- guidance material published by the Marine Metadata Initiative on this topic

URNs can theoretically be mapped to the three possible types of ontology/vocabulary definitions: classes, properties or individuals. In general, they correspond to individuals/instances (e.g. in the EPSG registry).

Issues:
- Is there anything in the OGC specifications which limits the use of URNs to named instances?
  - No: a URN (like any other kind of URI) identifies a _resource_. The kind of resource depends on the context; so the next question is what the different contexts are in which a URN may be used.
- Should OGC specifications be improved to define the different types of context for the different types of URNs and enable more robust validation mechanisms? (When the URN is mapped to a semantic web definition, it is possible to know if it corresponds to a class, a property or an individual.)

Here are some examples to help us identify the different contexts. (Note: there is a convention followed in GML schemas which is less rigorously applied in SWE and SensorML - it is important to identify the requests for changes issued to fix those discrepancies.)

@definition in swe:Quantity

<swe:Quantity

@definition: Points to semantics information defining the precise nature of the component (from the xsd, via [5])

This looks like a reference to an ObjectProperty in OWL:
- urn:ogc:def:property - the name starting with a lower case letter

@uom

Example 1: SensorML tutorial [6] (OGC-member-only) or [7]

<Quantity definition="urn:ogc:def:phenomenon:temperature" uom="urn:ogc:def:unit:celsius"/>

Expectation that the URN would match an individual. Note: uom is a special case in OGC schemas - for concision and convenience, it appears as an attribute, so it is not easily possible to annotate it specifically if we only have a generic mechanism for XML elements.

@xlink:href in Process

Example 1: SensorML tutorial [8]

<process name="thermometer" xlink:

Should this be a Process element instead of process? Expectation that the URN would match an individual.
@definition in sml:Term and text() in sml:value

<sml:Term
<sml:value>urn:x-ogc:object:sensor:MBARI:GPS:17:HVS</sml:value>
</sml:Term>

Example 2 (52North [10]):

<sml:Term
<sml:value>urn:ogc:object:feature:Sensor:IFGI:ifgi-sensor-3</sml:value>
</sml:Term>

<sml:Term
<sml:codeSpace xlink:
<sml:value>6-meter NOMAD buoy</sml:value>
</sml:Term>

Example 4 (SensorML/Alexandre Robin, 2007 [1]):

<sml:Term
<sml:codeSpace xlink:
<sml:value>RADAR</sml:value>
</sml:Term>

Example 5 (Luis Bermudez, public-xg-ssn 2009 [12]):

<sml:Term
<sml:value></sml:value>
</sml:Term>

The convention here is that @definition points to an ObjectProperty (but maybe it can also be a DatatypeProperty) and the value contains an individual or a literal.

@xlink:href in sml:codeSpace

<sml:codeSpace xlink:

Example 4 (SensorML/Alexandre Robin, 2007 [1]):

<sml:codeSpace xlink:

The two examples above are not comparable, but it is possible to infer from them that the use of xlink:href in the "codespace context" may differ from other contexts.

@xlink:role (?) in swe:field: same issue

Example of a property URN (here, it is not clear if @arcrole should be used instead):

<swe:field

@srsName pointing to a URN:

<gml:pos 681426 6630902</gml:pos>

GML spec. (Wikipedia) [14]. Usage of @href (to an individual) generally correct.

Dual use of URNs

The community developing and using OGC standards plans to use SKOS to manage vocabulary elements. This evolution will lead to situations where non-semantically augmented OGC files will contain references to URNs (which correspond to SKOS concepts or even the "cool URIs" of these concepts) in any of the contexts discussed above. The issue here is: "is there a risk of clash between this usage and the added semantic annotations?"

Tentative use of RDFa

Barnaghi et al. (2009) is an attempt to add semantic annotations to a SWE-based example, using a syntax which is somewhat intermediate between what's used in OWL and in RDFa (see comments below).

- Barnaghi et al. Sense and Sens'ability: Semantic Data Modelling for Sensor Networks, in Proc. of the ICT Mobile Summit 2009, June 2009.
- SWE's @definition mapped to class
- OWL-like attribute namespaces (not ok)
- @about mapped to individual (ok)
- @datatype mapped to xsd type (ok)
- @resource used but without corresponding @property
- @ID used (not ok)
- URI conventions (not ok)
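+Note: the chaining question raised above could be encoded with an OWL 2 property chain. A minimal sketch in Turtle follows; the sapience namespace is a placeholder (the actual Annotation ontology may use different URIs), the sawsdl namespace follows the SAWSDL RDF mapping, and the order modelReference-then-domainReference is an assumption reflecting the two-step schema-to-resource-to-domain mapping discussed above:

@prefix owl:      <http://www.w3.org/2002/07/owl#> .
@prefix sawsdl:   <http://www.w3.org/ns/sawsdl#> .
@prefix sapience: <http://example.org/sapience#> .

# "a modelReference followed by a domainReference implies a direct
#  (single-step) sawsdl:modelReference"
sawsdl:modelReference
    owl:propertyChainAxiom ( sapience:modelReference sapience:domainReference ) .

Under this axiom, a reasoner would infer a direct sawsdl:modelReference from a schema element to a domain concept whenever the two-step Sapience annotations are present, which is one way to reconcile the single-step (SAWSDL) and two-step (Sapience) mechanisms.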
http://www.w3.org/2005/Incubator/ssn/wiki/index.php?title=Review_of_semantic_annotation_proposals&oldid=2625
CC-MAIN-2015-22
en
refinedweb
On Thu, 2006-07-13 at 12:14 -0600, Eric W. Biederman wrote:
> Maybe. I really think the sane semantics are in a different uid namespace.
> So you can't assume uids are the same. Otherwise you can't handle open
> file descriptors or files passed through unix domain sockets.

Eric, could you explain this a little bit more? I'm not sure I understand the details of why this is a problem?

-- Dave
https://lkml.org/lkml/2006/7/13/253
CC-MAIN-2015-22
en
refinedweb
Type: Posts; User: RileyDeWiley

I will answer my own question for the benefit of the search engines ... I was working on development hardware in which the device ID and vendor ID were set to pre-release default values. The right way to...

I have two instances of a given device on a given machine (actually they are different devices but they use the same driver). I need to know which of these two instances I am talking to. The...

I am writing a user level app that needs to get the memory range used by a driver. To be clear, the value I need can be viewed in the Device Manager on the Resources tab of the Properties page, so I...

I can't post the code, but can tell you what is going on. There are three pages that matter:

- Landing page, on which you login;
- Start page, which has a link to page X;
- Page X, which has a...

I am dealing with a bug related to using history (back button or backspace key) to revisit a page. The bug seems to be related to cookie presentation (the bug is that a login page is shown if the...

I needed FieldInfo.DeclaringType. Thank me very much ..

I have some code that uses one base class and about 100 (yes, really!) derived classes. I wish to put into the base class some code that will serialize/deserialize any derived class. To do this I...

<whine> But the types for which I instantiate it do in fact have a Parse() method... can't the compiler wait and see what I am using it for? A C++ compiler would. </whine>

I have a workaround. ...

When I compile this:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace App1
{

I have some code I am porting from C++ to C#, and have gotten into a corner. I know there is an easy way out but don't see it. In C++:

char * sBuf = malloc(100 * sizeof(char));

Error 1 'System.Windows.Forms.DataGridView' does not contain a definition for 'DataBind'

Has to be something like that, though ...

I have a DataGridView object on a form and wish to populate it from a list or array (I have used List<>, ArrayList, and BindingSource to no avail). The basic problem is that no data appears after...

I need to serialize / deserialize the unmanaged class. That gonna work?

Riley

I have a problem that must be fairly common for c#-ies. I have a nifty set of unmanaged c++ classes, and a new requirement that the classes must be xml-serializable. So I have some choices: ...
http://forums.codeguru.com/search.php?s=cb845bc330f73c60e4e047066d9dce3d&searchid=7001531
CC-MAIN-2015-22
en
refinedweb
28 August 2008 18:35 [Source: ICIS news]

NEW DELHI (ICIS news)--India's Essar Gujarat Petrochemicals Limited (EGPL) would set up a phenol plant based on a cumene process developed by Italy's Polimeri Europa (PE) and offered by Lummus Technologies of the US, a government source said on Thursday.

The plant would have the capacity to produce 200,000 tonnes/year of phenol and 120,000 tonnes/year of co-product acetone, an Indian government official said. The Indian rupees (Rs) 7bn ($160m) plant would also produce intermediate cumene by alkylation of benzene with propylene.

The plant would form part of EGPL's petrochemical complex that is scheduled for commissioning by April 2012 at Vadinar in Gujarat.

According to EGPL, the cumene process uses a unique zeolite catalyst named PBE-1 developed by PE.

EGPL has filed an application with the Indian government, seeking approval for its technical collaboration for the phenol plant with Lummus, which has the right to license PE's cumene process to other companies. EGPL would pay $8.6m to Lummus as fees for the supply of the know-how and basic engineering package, the official said.

($1 = Rs43.64)
http://www.icis.com/Articles/2008/08/28/9152416/indias-essar-to-set-up-phenol-plant-in-gujarat.html
CC-MAIN-2015-22
en
refinedweb
Introduction

This is a guest post by Daniel Koestler, an Adobe applications developer. This post will explain how to connect your Flash, Flex, and AIR apps to Photoshop using the Photoshop Touch SDK. The author created the Photoshop Touch SDK for AS3 with help from Renaun Erickson, an Adobe developer evangelist. This part of the SDK is a SWC distributed in the freely available download. This article will tell you how to create a new project, connect to Photoshop, and send simple commands back and forth. There are additional resources at the end of the article, which will guide you through more advanced steps.

What is the Photoshop Touch SDK?

The Photoshop Touch SDK is a collection of APIs that allow virtually any device to connect to and control Photoshop, using any Internet or WiFi connection. For the first time, you can interface with Photoshop directly, and use this to create mobile, desktop, or web applications that are tailored to the needs of creative professionals or casual-creative users. The Photoshop Touch SDK is available for free from Adobe, and works with Photoshop CS5 12.0.4 and above. It also includes a SWC library, which contains the APIs that this article covers. This SWC library, called the Photoshop Touch SDK for AS3, allows you to write very simple ActionScript 3 code in any Flash, AIR, or Flex application, and saves you from doing tedious socket-level work. As you'll hopefully discover, these AS3 APIs are flexible and easy-to-use, and will allow you to leverage the portability of Flash, the versatility of Flex, and the power of ActionScript 3 to help you realize your vision for designing creative apps.

Sample Code

As you follow along, you may want to refer to the sample code, which contains a project that's been created by following this blog post. See the Additional Resources section for information about an upcoming ADC article, which will also cover more advanced topics.

Table of Contents:
- Introduction
- Requirements
- Creating a Project
- Connecting to Photoshop
- Sending Commands to Photoshop
- Summary
- Additional Resources

Creating a Project

Step 1: Create a new Flex Mobile project.

Connecting to Photoshop

Overview:
- Create a new PhotoshopConnection and listen for events
- Call connect() on your instance of the PhotoshopConnection
- After a successful connection, initialize encryption using either initEncryption() or (if you've saved the user's key with getKey()) initEncryptionFromKey()

After these steps have been completed, you may send and receive data with Photoshop. As we code these features into the mobile application, we'll create data structures that will allow you to easily add functionality later in this article.

Step 1: Create a singleton Model, to establish an MVC design

Our application needs to create a PhotoshopConnection, but we want to store it in a location where it can be conveniently accessed by various parts of our UI (the View in the Model-View-Controller design pattern). Thus, we'll create a Model in which to store object references, constants, and variables.

- Right click your project in Flash Builder, and choose New ActionScript Class
- Enter the string "model" as the package
- Name the class "Model"
- Click Finish

We now need to add a static variable to this class, a function called getInstance() which returns that variable, and, finally, a Bindable, public variable that will store our PhotoshopConnection. Enter the following code inside of public class Model { ... }
private static var _inst:Model;

[Bindable]
public var photoshopConn:PhotoshopConnection;

public function Model()
{
}

public static function getInstance():Model
{
    if ( !_inst )
    {
        _inst = new Model();
    }
    return _inst;
}

We can now reference the variable photoshopConn from either AS3 or Flex code, simply by calling Model.getInstance() and referencing photoshopConn. I.e., Model.getInstance().photoshopConn.

Step 2: Instantiate the PhotoshopConnection and listen for events

We'll instantiate the PhotoshopConnection the first time the user attempts to connect, but it would be a good idea to create initialization code in your own applications, to handle things like reading the hostname and password from disk, managing the user's key and preferences, etc. Open your views/LoginView.mxml file. You'll see that we've created two TextInput components and a Button, as well as an fx:Script tag that will contain some click-handler logic. When this button is pressed, we call a function named createNewConnection(), which runs if the photoshopConn variable is null. It instantiates the connection and attaches event listeners, so we now have to create the handler functions onConnected, onEncrypted, and onError. With those in place, we're ready to try and connect to Photoshop. It's always a good idea to remove event listeners when you're not using them, however, so create a function called cleanUp(), and remove each of those three event listeners from the photoshopConn instance. We'll call this function once we're ready to switch Views in the application (after successfully encrypting the connection).

Step 3: Call connect()

With the listeners attached, call connect() on your photoshopConn instance, passing along the connection details the user entered.

Step 4: Initialize encryption, and move on to the next View

As the docs indicate, a successful connection will cause PhotoshopConnection to dispatch a PhotoshopEvent.CONNECTED event. Since we're listening for this event, our function onConnected will be called. It's here that we initialize encryption, using the password the user entered. Once encryption has been set up, our code will enter the onEncrypted() event handler. At that point, we're ready to send data to and from Photoshop. To prepare for this step:

- Right click your project and select New MXML Component
- Put it in the package "views," and name it "HomeView"
- Click "Finish"

Now, we just have to push a HomeView onto the ViewNavigator:

private function onEncrypted(pe:PhotoshopEvent):void
{
    trace("Encryption was successful. Cleaning up event listeners.");
    this.cleanUp();

    trace("Proceeding to the 'Home' View.");
    this.navigator.pushView(HomeView);
}

We've also cleaned up the event listeners, which you should do wherever possible to prevent memory leaks. You should now test your project. In the next section, we'll send some simple commands to Photoshop.

Sending Commands to Photoshop

At this point in your application, you've used the Photoshop Touch SDK to establish an encrypted connection to Photoshop. You're ready to send and receive data. With a single function call you can push raw bytes to Photoshop if you wish, but since we're just beginning, we will use the simplest of the available method calls. We'll create an s:Button in our HomeView that tells Photoshop to create a new document. Photoshop will respond with an id referencing the document.

Step 1: Create a MessageDispatcher instance

Before we can use the MessageDispatcher, we have to create a new instance of it, and give it a reference to our existing PhotoshopConnection (this allows the MessageDispatcher to use the connection that we initialized in the previous section). We'll store this instance in the Model, just like we do with the photoshopConn variable.
With that sketch in mind, let's wire up the dispatcher. In your Model.as file, add the following:

[Bindable]
public var messageDisp:MessageDispatcher;

The Bindable property allows us to use this variable in Flex and/or attach our own ChangeWatchers, should the need arise. We're now ready to use this Object.

Step 2: Listen for Photoshop's Response(s)

We could send the command at this point, but, should an error occur, our application would never hear about it. Thus, it's necessary to attach some event listeners to the PhotoshopConnection. There are three events that we need to handle here. The SDK dispatches a number of other useful events, such as MessageSentEvent, ProgressEvent, and ImageReceivedEvent, but we won't need those just yet.

Step 3: Send a Message to Photoshop

When the user clicks our button, we'll tell the MessageDispatcher to dispatch a Message to Photoshop:

Model.getInstance().messageDisp.createNewDocument();

Pay particular attention to the default parameters in that function call. As the ASDocs note, since we're creating a relatively simple application, we don't need the added flexibility that comes with managing our own transaction IDs. Finally, we remove any event listeners we no longer need.

Summary

At this point you've been shown how to: create a project; link to the Photoshop Touch SDK libraries; set up a Model-View architecture for managing the Photoshop objects; connect to Photoshop and manage encryption; and send messages while listening for responses. There are still a number of tasks that you may want your application to perform, and the SDK can help you with these. For example, you can use the SDK to:

- Listen for foreground and background color changes
- Listen for tool change events
- Be notified when the user modifies a document
- Change the brush size, the currently selected tool, or the document's properties
- Send other, custom commands

These tasks are made possible by using the SubscriptionManager, TransactionManager and Photoshop's ScriptListener plug-in. Please see Daniel Koestler's ADC article and blog to learn about these tasks, and to get tutorials and sample code that'll help you take your applications further.

Additional Resources

An ADC article covering the content of this blog post (as well as more advanced topics) will be available next week. Please check Daniel Koestler's blog, where he'll post the article as soon as it's available. You may also want to follow him on Twitter: @antiChipotle.

Update 6/16/2011: The ADC article is now published. That article contains some additional information about using the Photoshop Touch SDK. You may want to download the sample code, which contains a project that has been created following the above steps. The ADC article contains code that demonstrates the SubscriptionManager, custom messages, and other, more advanced tasks.

Comments

Very useful article… Thanks. Question: How can I discover available Photoshop connections? Rather than providing an IP address to connect to that computer, I want to see all the available Photoshop connections to connect with. Like you guys have done with these three apps (Eazel, Color Lava, Nav).

Sounds cool, but I got this: [IOErrorEvent type="ioError" bubbles=false cancelable=false eventPhase=2 text="Error #2031: Socket Error. URL: xxx.xxx.x.xx" errorID=2031] (I am using the Photoshop CS5 12.1 trial.)

At what point do you get that event? When you attempt to connect, or when you attempt to send a command?
Login attempt:

***
private function onError(pe:PhotoshopEvent):void
{
    trace( pe.data );
    trace("There was an error while connecting!");
}
***

[SWF] ADCTutorial.swf – 3 354 931 octets après la décompression
[IOErrorEvent type="ioError" bubbles=false cancelable=false eventPhase=2 text="Error #2031: Socket Error. URL: xxx.xx.x.xx" errorID=2031]
There was an error while connecting!
***

There could be a number of things causing that IOErrorEvent, so we need more information. The Photoshop Touch SDK uses the StructuredLogTestingSDK for its internal logging. We can turn it on and see some information that'll help you debug. You first have to set up a trace target, to have the SDK log via trace statements:

var traceTarget:TraceTarget = new TraceTarget();
traceTarget.includeCategory = true;
traceTarget.includeDate = true;
traceTarget.includeLevel = true;
traceTarget.includeTime = true;

Then call two static functions on SLog; the first function will add the trace target, and the second will test the logger's ability to output:

SLog.addTarget(traceTarget);
SLog.debug(this,"Logging initialized.");

You may have to link to StructuredLogTestingSDK.swc. You can get it here: If that link stops working, its homepage is:

Pingback: John Nack on Adobe: News for Suite developers

I'm always getting error 2031 when trying to connect to any socket from Flex Mobile projects. I tried to serve crossdomain.xml and security policy files, with no luck. Sockets are broken in Flex Mobile; I gave up.

Hey John, would you be able to provide me with a project where you keep getting #2031? crossdomain.xml and policy files shouldn't be necessary, though it's possible you're trying to connect in a way I didn't anticipate. (Did you try the usual things, such as switching networks, disabling firewalls, manually verifying the IP, etc?) -Dan

The reason for the error #2031 is failing to enable Photoshop to accept incoming connections and failing to set a password. To correct this issue, you must edit your Photoshop settings. To allow remote connections:

1. Open Photoshop.
2. Choose the Edit menu.
3. Choose the "Remote Connections…" option.
4. You may leave the Service Name as is, or you can specify a different one.
5. Enter a password, at least 6 characters in length.
6. Check the box to "Enable Remote Connections".
7. Click OK.

Now when you run the AIR app from this tutorial, your app will be able to connect to Photoshop, properly authenticate, and communicate. Also, be sure to enter your correct password in the Login window in this app when you run it. Alternatively, you can also edit the source to pre-populate your password by changing the text value of the text input field.

Hello, I want to know how to send an image to Photoshop. I have sent an image to Photoshop, but I don't know how to make Photoshop open this image, and I don't know where I can find this image. If you can give me an example, that would be best! Thank you!

If you have Photoshop installed on your system, you can use File > Open from within Photoshop and navigate to your image files to open them. If you're on a Mac, then you can also drag an image to the Photoshop icon on your Dock. On Windows 7, you can right-click an image and choose "Open With > Adobe Photoshop CS6".

Hi. One of my computers is not showing the IPv4 address inside Photoshop (office network). Any reason for that? I can't connect with a prototype I'm working on. Another question: is there any way to get a list of discovered connections? You know, some applications like Acquire do this. Thanks!
Hi Daniel, we'd need more info, like OS/platform. Does Photoshop connect to the network correctly if you choose Help > Photoshop Support Center… from Photoshop? If you haven't already, I'd post more details on the SDK/companion app forum:

Hi Jeffrey, thanks for your reply! Yes, it is connecting correctly with the network. And apps like 'Live View' or 'Acquire' are connecting correctly with it. That's why I was asking about 'discovered connections'. I'm running PS 13.1.2 on OS X 10.7.5. I will check the forum.
http://blogs.adobe.com/crawlspace/2011/05/connecting-to-photoshop-with-flash-flex-and-air-2.html
CC-MAIN-2015-22
en
refinedweb
On Thu, 2 Sep 2010, Jeremy Fitzhardinge wrote:
> On 08/30/2010 04:20 AM, Stefano Stabellini wrote:
> > Hi all,
> > this.
> > This series is based on Konrad's pcifront series (not upstream yet):
> >
> > and requires a patch to xen and a patch to qemu-xen (just sent to
> > xen-devel).
>
> My only concern with this series is the pirq remapping stuff. Why do
> pirq and irq need to be non-identical? Is it because pirq is a global
> namespace, and dom0 has already assigned it?
>
> Why do guests need to know about max pirq? Would it be better to make
> Xen use a more dynamic structure for pirqs so that any arbitrary value
> can be used?

No, pirq is a per-domain namespace, but pirq and irq are conceptually different: pirqs are used by xen as a reference for interrupts of devices assigned to the guest, while linux uses irqs for its internal purposes.

The pirq namespace is chosen by xen while the linux irq namespace is chosen by linux.

Linux is allowed to choose the pirq number it wants when mapping an interrupt; this is why linux needs to know the max pirq, so that it can safely choose a pirq that is in the allowed range.

The difference between pirqs and linux irqs increases when we talk about PV on HVM guests: in this case qemu also maps interrupts in the guest, getting pirqs in return, so the linux kernel has to be able to cope with already assigned pirq numbers.

The current PHYSDEVOP_map_pirq interface is already flexible enough for that because it provides the possibility for the caller to let xen choose the pirq, something that linux never does in the pure PV case, but it is still possible. Obviously if you let xen choose the pirq number you are safe from conflicts, but you must be able to cope with pirq numbers different from linux irq numbers.
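For illustration, the mapping interface described above might be exercised roughly like this. This is a sketch from memory of xen's public physdev.h; the struct fields, constants, and header path are assumptions to verify against the actual headers:

/* Illustrative sketch: map a GSI into this domain's pirq space.
 * Passing pirq = -1 asks Xen to allocate a free pirq, so the caller
 * must be prepared for a pirq that differs from any Linux irq number. */
#include <xen/interface/physdev.h>

static int map_gsi_to_pirq(int gsi)
{
    struct physdev_map_pirq map = {
        .domid = DOMID_SELF,
        .type  = MAP_PIRQ_TYPE_GSI,
        .index = gsi,
        .pirq  = -1,   /* let Xen choose; pass a specific value to request one */
    };
    int rc = HYPERVISOR_physdev_op(PHYSDEVOP_map_pirq, &map);

    /* On success, Xen has filled in the pirq it picked. */
    return rc ? rc : map.pirq;
}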
http://lkml.org/lkml/2010/9/3/265
CC-MAIN-2015-22
en
refinedweb
01 November 2010 08:23 [Source: ICIS news]

SINGAPORE (ICIS)--UK-based Elementis said on Monday it expects its earnings for the full year to be ahead of market expectations after posting an 11% year-on-year increase in sales volumes of its specialty products in the third quarter.

Sales volumes of specialty products to the oilfield sector surged 84% year on year in the third quarter, largely due to the increase in shale gas drilling.

Meanwhile, sales volumes in personal care rose 55% year on year, or 17% higher excluding its acquisition of cosmetics raw material supplier Fancor that was concluded in December last year, it said.

In Asia Pacific, sales volumes of coating additives rose 8% year on year as the region continued to experience strong growth in "higher value differentiated products", Elementis said.

The company's specialty products business has continued to extend its overall operating margin due to selective price increases and product mix optimisation, it said, adding that operating margins for the third quarter of 2010 were higher than for the first six months of the year.

Elementis' chromium business, meanwhile, posted an improvement in its operating margins in the third quarter versus the first six months of the year, as overall customer demand boosted its capacity utilisation rates.

"As a result of the progress outlined above, earnings for the full year are expected to be ahead of market expectations," the company said. "Net debt has continued to decline since 30 June 2010; otherwise there has been no material change in the group's financial position, which remains
http://www.icis.com/Articles/2010/11/01/9406095/uks-elementis-full-year-earnings-to-be-ahead-of-expectations.html
CC-MAIN-2015-22
en
refinedweb
When I compile this code:

#include <mmintrin.h>
__m64 moo(int i)
{
    __m64 tmp = _mm_cvtsi32_si64(i);
    return tmp;
}

with (GCC) 4.0.0 20050116 like so:

gcc -O3 -S -mmmx moo.c

I get this (without the function pop/push etc):

movd 12(%ebp), %mm0
movq %mm0, (%eax)

However, if I use the -msse flag instead of -mmmx, I get this:

movd 12(%ebp), %mm0
movq %mm0, -8(%ebp)
movlps -8(%ebp), %xmm1
movlps %xmm1, (%eax)

gcc 3.4.2 does not display this behavior. I didn't get the chance to test it on my Linux installation yet, but I'm pretty sure it's going to give the same results. I didn't use any special flags configuring or building gcc (just ../gcc-4.0-20050116/configure --enable-languages=c,c++ , and make bootstrap). With the -O0 flag instead of -O3, we see that gcc seems to have replaced some movq's with movlps's (why??), and they do not get cancelled out during optimization. I will attach the .i file generated by "gcc -O3 -S -msse moo.c". I also tried a "direct conversion":

__m64 tmp = (__m64) (long long) i;

But I get a compiler error: internal compiler error: in convert_move, at expr.c:367

Created attachment 7991 [details]
gcc -O3 -S -msse moo.c --save-temps

Hmm, looking at the rtl dumps this looks like the register allocator sucks, as the sse register is picked in the -msse case but in the -mmmx case only the mmx register is picked. Someone needs to take an axe to the register allocator :).

Subject: Bug 19530
CVSROOT: /cvs/gcc
Module name: gcc
Changes by: rth@gcc.gnu.org 2005-01-20 18:34:13

Modified files:
gcc : ChangeLog
gcc/config/i386: i386.c mmintrin.h mmx.md

Log message:
PR target/19530
* config/i386/mmintrin.h (_mm_cvtsi32_si64): Use __builtin_ia32_vec_init_v2si.
(_mm_cvtsi64_si32): Use __builtin_ia32_vec_ext_v2si.
* config/i386/i386.c (IX86_BUILTIN_VEC_EXT_V2SI): New.
(ix86_init_mmx_sse_builtins): Create it.
(ix86_expand_builtin): Expand it.
(ix86_expand_vector_set): Handle V2SFmode and V2SImode.
* config/i386/mmx.md (vec_extractv2sf_0, vec_extractv2sf_1): New.
(vec_extractv2si_0, vec_extractv2si_1): New.

Patches:
Fixed.

MMX intrinsics don't seem to be a standard (?), but I'm under the impression that _mm_cvtsi32_si64 is supposed to generate MMX code. I just tested with (GCC) 4.0.0 20050123; with the -mmmx flag, the result is still the same, and with the -msse flag I now get:

movss 12(%ebp), %xmm0
movlps %xmm0, (%eax)

which is correct, but what I'm trying to get is a MOVD so I don't have to fish back into memory to use the integer I wanted to load into an mmx register. Or is there another way to generate a MOVD? Also, _mm_unpacklo_pi8 (check moo2.i) still generates superfluous movlps:

punpcklbw %mm0, %mm0
movl %esp, %ebp
subl $8, %esp
movl 8(%ebp), %eax
movq %mm0, -8(%ebp)
movlps -8(%ebp), %xmm1
movlps %xmm1, (%eax)

I guess any MMX intrinsics that make use of the (__m64) cast conversion will suffer from the same problem. I think the fix to all these problems would be to prevent the register allocator from using SSE registers when compiling MMX intrinsics, no?

Created attachment 8055 [details]
the _mm_unpacklo_pi8 one

Hmm. Seems to only happen with -march=pentium3, and not -march=pentium4... Sorry, but this appears to be unfixable without a complete rewrite of MMX support. Everything I tried had side effects where MMX instructions were used when we were not using MMX intrinsics. I'm wondering, would there be a #pragma directive that we could use to surround the MMX intrinsics function, and that would prevent the compiler from using the XMM registers?
Even stranger, it doesn't do it with -march=athlon either... only -march=pentium, pentium2 or pentium3...? That seems like some weird bug here. There mustn't be THAT big of a difference between the code for pentium3 and the one for athlon, right?

Oh oh, I think I'm getting somewhere... if I use both the -march=athlon and -msse flags I get the "bad" code. Let me summarize this:

-march=pentium3 = bad
-msse = bad
-march=athlon = good (i.e.: no weird movss or movlps, everything looks good)
however -march=athlon -msse = bad

hum...

(In reply to comment #10)
> That seems like some weird bug here. There mustn't be THAT big of a difference
> between the code for pentium3 and the one for athlon right?

Well, duh, athlon doesn't have sse.

Ok ok, SSE is not enabled by default on Athlon... So, is there some sort of "pragma" that could be used to disable SSE registers (force -mmmx, sort of) for only part of some code? The way I see it, the problem seems to be that gcc views __m64 and __m128 as the same kind of variables, when they are not. __m64 should always be on mmx registers, and __m128 should always be on xmm registers. Actually, Intel created a new type, __m128d, instead of trying to guess which of the integer or float instructions one should use for stuff like MOVDQA. We can easily see that gcc is trying to put an __m64 variable on xmm registers in moo2.i. I can also prevent it from using an xmm register by using only __v8qi variables (which are invalid, i.e. too small, on xmm registers):

__v8qi moo(__v8qi mmx1)
{
    mmx1 = __builtin_ia32_punpcklbw (mmx1, mmx1);
    return mmx1;
}

tadam! no movss or movlps... Shouldn't gcc avoid trying to place __m64 variables on xmm registers? If one wants to use an xmm register, one should use __m128 or __m128d (or at least a cast from a __m64 pointer); even on the Pentium 4, I think it makes sense, because moving stuff from mmx registers to xmm registers is not so cheap either. If one wants to move one 32 bit integer to an mmx register, that should be the job of a specialized intrinsic (_mm_cvtsi32_si64) which maps to a MOVD instruction. And if one wants to load a 64 bit something into an xmm register, that should be the job of _mm_load_ss (and other such functions). At the moment, these intrinsics (_mm_cvtsi32_si64, _mm_load_ss) do NOT generate a mov instruction by themselves: they go through a process (from what I can understand of i386.c) of "vector initialization" which starts generating mov instructions from the MMX, SSE or SSE2 sets without discrimination... In my mind _mm_cvtsi32_si64 should generate a MOVD, and _mm_load_ss a MOVSS, period. Just like __builtin_ia32_punpcklbw generates a PUNPCKLBW. Does it make sense? Is this what you mean by a complete rewrite, or were you thinking of something else?

> So, is there some sort of "pragma" that could be used to disable SSE
> registers (force -mmmx sort of) for only part of some code?

No.

> __m64 should always be on mmx registers, and __m128 should always be on
> xmm registers.

Well, yes and no. Given SSE2, one *can* implement *everything* in <mmintrin.h> with SSE registers.

> I can also prevent it from using an xmm register by [...]

... doing something complicated enough that, for the existing patterns defined by the x86 backend, it very much more strongly prefers the mmx registers. Your problems with preferencing will come only when the register in question is only used for data movement. Which, as can be seen in your _mm_unpacklo_pi8 test case, can happen at surprising times.
There are *two* registers to be register allocated there. The one that does the actual unpack operation *is* forced to be in an MMX register. The other moves the result to the return register, and that's the one that gets mis-allocated to the SSE register set.

> If one wants to move one 32 bit integer to a mmx register, that should be the
> job of a specialized intrinsics (_mm_cvtsi32_si64) which maps to a MOVD
> instruction.

With gcc, NONE of the intrinsics is a strict 1-1 mapping to ANY instruction.

> Does it make sense? Is this what you mean by a complete rewrite or were you
> thinking of something else?

Gcc has some facilities for generic vector operations. Ones that don't use any of the foointrin.h header files. When that happens, the compiler starts trying to use the MMX registers. But it doesn't know how to place the necessary emms instruction, which is Bad. At the moment, the only way to prevent this from happening is to strongly discourage gcc from using the MMX registers to move data around. This is done in such a way that the only time it will use an MMX register is when we have no other choice. Which is why you see the compiler starting to use SSE registers when they are available.

You might think that we could easily use some pragma or something when <mmintrin.h> is in use, since it's the user's responsibility to call _mm_empty when necessary. Except that due to how gcc is structured internally, you'd not be able to be 100% certain that all of the mmx data movement remained where you expected. Indeed, we have open PRs where this kind of movement is in fact shown to happen. Thus the ONLY solution that is sure to be correct is to teach the compiler what using MMX registers means, and where to place emms instructions. Which is the subject of the PR against which this PR is marked as being blocked. This cannot be addressed in any satisfactory way for 4.0. Frankly, I wonder if it's worth addressing at all. To my mind it's just as easy to write pure assembly for MMX. And pretty soon the vast majority of ia32 machines will have SSE2, and that is what the autovectorizer will be targeting anyway.

PS: your best solution, for now, is simply to use -mno-sse for the files in which you have mmx code. Move the sse code to a separate file. That really is all I can do or suggest.

Ok, so from what I gather, the backend is being designed for the autovectorizer, which will probably only work right with SSE2 (on x86, that is), as mucking with emms will probably bring too much trouble. Second, we can do any MMX operations on XMM registers in SSE2, so the code for SSE2 does not need to be changed optimization-wise for intrinsics. As for a pragma or something, could we for example disable the automatic use of such instructions as movss, movhps, movlps, and the like on SSE1 (if I may call it that way)? That would most certainly prevent gcc from trying to put __m64 in xmm registers, however eager it might want to mov it there... (would it?) And supply a few built-ins to implement manual use of those instructions. I guess such a solution would be nice, although I realize it might not be too kosher ;)

I use MMX to load char * arrays into shorts and convert them into float in SSE registers, to process them with float * arrays, so I can't separate the MMX code from the SSE code...
Of course, with the way things look at the moment, I might end up writing everything in assembler by hand, but scheduling 200+ instructions (yup yup, I have some pretty funky code here) by hand is no fun at all, especially if (ugh, when) the algorithm changes. Also, the same code in C with intrinsics can target x86-64 :) yah, that's cool.

I think I'm starting to see the problem here... I tried to understand more of the code, and from this and what you tell me, gcc finds registers to use and then finds instructions that fit the bill. So preventing gcc from using some instructions will only end up in an "instruction not found" error. The register allocator is the one that shouldn't allocate them in the first place, right? Well, let's forget this for now... maybe we should look at the optimization stages:

movq %mm0, -8(%ebp)
movlps -8(%ebp), %xmm1
movlps %xmm1, (%eax) <- If movlps merely moves 64 bit stuff around, why wasn't it optimized down to one equivalent movq that also moves 64 bit stuff around?

Would that be an optimizer problem instead? Hum, there apparently seems to be a problem with the optimization stages. I cooked up another snippet:

void moo(__m64 i, unsigned int *r)
{
    unsigned int tmp = __builtin_ia32_vec_ext_v2si (i, 0);
    *r = tmp;
}

With -O0 -mmmx we get:

movd %mm0, -4(%ebp)
movl 8(%ebp), %edx
movl -4(%ebp), %eax
movl %eax, (%edx)

which with -O3 gets reduced to:

movl 8(%ebp), %eax
movd %mm0, (%eax)

Now, clearly it understands that "movd" is the same as "movl", except they work on different registers on an MMX-only machine. With "movlps" and "movq" it should do the same, I think? If the optimization stages can work this out, maybe we wouldn't need to rewrite the MMX/SSE1 support... (BTW, correction: when I said 200+ instructions to schedule, I meant per function. I have a dozen such functions with 200+ instructions, and it ain't going to get any smaller.)

Hum, ok, we can do a "movd %mm0, %eax"; that's why it gets combined... Well, I give up.

The V8QI (and whatever) -> V2SI conversion seems to be causing all the trouble here, if we look at the RTL of something like:

__m64 moo(__v8qi mmx1)
{
    mmx1 = __builtin_ia32_punpcklbw (mmx1, mmx1);
    return mmx1;
}

It explicitly asks for a conversion to V2SI (__m64) that gets assigned to an xmm register afterwards:

(insn 15 14 17 1 (set (reg:V8QI 58 [ D.2201 ]) (reg:V8QI 62)) -1 (nil) (nil))
(insn 17 15 18 1 (set (reg:V2SI 63) (subreg:V2SI (reg:V8QI 58 [ D.2201 ]) 0)) -1 (nil) (nil))
(insn 18 17 19 1 (set (mem/i:V2SI (reg/f:SI 60 [ D.2206 ]) [0 <result>+0 S8 A64]) (reg:V2SI 63)) -1 (nil) (nil))

So... the only way to fix this would be to either make the register allocator more intelligent (bug 19161), or to provide intrinsics like the Intel compiler does, with one-to-one mapping to instructions directly, right? That wouldn't be such a bad idea, I think... instead of using the current __builtins for stuff in *mmintrin.h, we could use a different set of builtins that only supports V2SI and nothing else..? Well, that's going to be for another time ;)

Is the emms issue mentioned in comment #14 fixed with Uros' patch proposed here:?

Hum, it will be interesting to test this (it will have to wait a couple of weeks), but the problem here is that there is no "mov" instruction that can move stuff between MMX registers and SSE registers (MOVQ can't do it). In SSE2, there is one (MOVQ), but not in the original SSE.
So the compiler generates movlps instructions from/to memory from/to SSE registers alongside MMX calculations, and, in the original SSE case, ends up not being able to reduce any further than MMx->memory->XMMx->memory->MMx again for data that should have stayed in MMX registers all along... it does not realize up front how expensive it is to use XMM registers on "SSE1" along with MMX instructions.

As this bug is getting a bit confused, I have summarised the testcases below:

--cut here--
#include <mmintrin.h>

__m64 moo_1 (int i)
{
  __m64 tmp = _mm_cvtsi32_si64 (i);
  return tmp;
}

__m64 moo_2 (__m64 mmx1)
{
  __m64 mmx2 = _mm_unpacklo_pi8 (mmx1, mmx1);
  return mmx2;
}

void moo_3 (__m64 i, unsigned int *r)
{
  unsigned int tmp = __builtin_ia32_vec_ext_v2si (i, 0);
  *r = tmp;
}
--cut here--

I think that the problems described were fixed by PR target/21981. With the patch from comment #20, 'gcc -O2 -msse3' produces the following asm:

moo_1:
	pushl %ebp
	movl %esp, %ebp
	movd 8(%ebp), %mm0
	popl %ebp
	ret

moo_2:
	pushl %ebp
	punpcklbw %mm0, %mm0
	movl %esp, %ebp
	popl %ebp
	ret

moo_3:
	pushl %ebp
	movl %esp, %ebp
	movl 8(%ebp), %eax
	movd %mm0, (%eax)
	emms
	popl %ebp
	ret

I have checked that there are no SSE instructions present in any of the testcases for -mmmx, -msse, -msse2 and -msse3. I suggest closing this bug as fixed and eventually opening a new bug with new testcases.

Regarding emms in moo_3: as the output of moo_3 () is _not_ an mmx register, FPU mode should be switched to 387 mode before function exit. (In the proposed patch, this could be overridden by -mno-80387 to get rid of all emms insns.)

Yup, excited! Today I compiled the main branch to check this out (gcc-4.1-20050618) and it seems to be fixed! I don't see any strange movlps in any of the code I tried to compile with it. Can be moved to FIXED (I'm not sure I should be the one to switch it??) Thanks to Uros and everybody!
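To illustrate the workaround suggested earlier in the thread (keep MMX and SSE code in separate translation units, and build the MMX file with -mno-sse), here is a hedged sketch; the file and function names are made up for the example:

/* mmx_part.c -- compile with: gcc -O3 -mmmx -mno-sse -c mmx_part.c */
#include <mmintrin.h>

/* Widen the low 4 unsigned chars to shorts using MMX only; with SSE
 * disabled for this file, the allocator cannot pick XMM registers.
 * Callers must still issue _mm_empty() before using x87 floating point. */
__m64 widen_lo_u8(__m64 packed)
{
    return _mm_unpacklo_pi8(packed, _mm_setzero_si64());
}

/* sse_part.c -- compile with: gcc -O3 -msse -c sse_part.c */
#include <xmmintrin.h>

float sum4(__m128 v)
{
    float out[4];
    _mm_storeu_ps(out, v);
    return out[0] + out[1] + out[2] + out[3];
}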
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=19530
CC-MAIN-2015-22
en
refinedweb
On 08/23/2013 01:18 PM, Chen Hanxiao wrote:
> From: Chen Hanxiao <chenhanxiao cn fujitsu com>
>
> If we don't enable the network namespace, we could shut down the host
> by executing the command 'shutdown' inside a container.
> This patch will add some warnings to the LXC docs and give some
> advice to readers.
>
> Signed-off-by: Chen Hanxiao <chenhanxiao cn fujitsu com>
> ---

ACK

> docs/drvlxc.html.in | 7 +++++++
> 1 files changed, 7 insertions(+), 0 deletions(-)
>
> diff --git a/docs/drvlxc.html.in b/docs/drvlxc.html.in
> index 640968f..8f3a36a 100644
> --- a/docs/drvlxc.html.in
> +++ b/docs/drvlxc.html.in
> @@ -50,6 +50,13 @@ processes inside containers cannot be securely isolated from host
>  process without the use of a mandatory access control technology
>  such as SELinux or AppArmor.</strong>
>  </p>
> +<p>
> +<strong>WARNING: If the 'net' namespace is <i>not</i> enabled for a container,
> +the host OS could be <i>shut down</i> by executing a command like 'reboot'
> +inside the container.<br/>So make sure the 'net' namespace is available and
> +set the <privnet/> feature in the XML, or configure virtual NICs.
> +Then this issue can be circumvented.</strong>
> +</p>
>
>  <h2><a name="init">Default container setup</a></h2>
>
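For readers who want to apply the advice in the patch, here is a minimal, hypothetical fragment of an LXC domain XML that enables the private network namespace via the <privnet/> feature; the name, memory size, and init path are placeholders, and element placement follows the usual libvirt <features> block:

<domain type='lxc'>
  <name>demo-container</name>
  <memory unit='KiB'>524288</memory>
  <os>
    <type>exe</type>
    <init>/sbin/init</init>
  </os>
  <features>
    <!-- Give the container its own network namespace even with no NICs,
         so a 'reboot' issued inside the container cannot reach the host. -->
    <privnet/>
  </features>
</domain>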
https://www.redhat.com/archives/libvir-list/2013-August/msg01239.html
CC-MAIN-2015-22
en
refinedweb
Our prioritization includes these healthcare industry-specific perceived threats:

- Data leak prevention
- Cyberattack prevention
- Insider threat reduction (the "evil admin" problem)
- Software supply chain attack prevention

There are many other attack vectors, but for this article we'll be focusing on these four use cases.

1. Data leak prevention

Personal Health Information (PHI) is more valuable on the black market than credit card credentials or regular Personally Identifiable Information (PII), so there's a higher incentive for cyber criminals to target medical databases so they can sell the PHI or use it for their own personal gain. Data leaks can also severely damage an organization's reputation and decrease trust with customers and other healthcare partners.

Collecting and interpreting sensitive patient data is critical for healthcare organizations. Organizations must also protect that data against cybersecurity threats while also complying with the Health Insurance Portability and Accountability Act (HIPAA) and other standards to reduce the risk of data leaks. Embedding security controls throughout the full platform and development cycles helps organizations manage sensitive data and ensure that access is only granted to authorized and authenticated users.

Infrastructure security

You can consider some of these measures to better secure your infrastructure and prevent data leaks:

- Ensure unencrypted data isn't exposed through the base operating system by using self-encrypting disks (SED), encrypted enterprise storage or encrypted block storage provided by an external cloud provider. Red Hat Enterprise Linux CoreOS also supports Linux Unified Key Setup (LUKS) disk encryption secured with Trusted Platform Module (TPM) v2 or Network Bound Disk Encryption (NBDE), which requires a Tang server.
- The Federal Information Processing Standard Publication 140-2 (FIPS-140-2) is a standard defining a set of security requirements for the use of cryptographic modules. Red Hat OpenShift Data Foundation is now using FIPS-validated cryptographic modules as delivered by Red Hat Enterprise Linux OS/CoreOS (RHCOS).
- The use of a service mesh such as Istio can help provide strong identity, powerful policy, mutual TLS encryption and authentication, authorization and audit (AAA) tools to protect services and data. Istio can also provide certificate management using X.509 certificates.

Tampering can lead to elevation of privileges, information loss and damage. Ensuring that the data is not maliciously altered is critical to maintaining data security. While some Red Hat OpenShift components such as container images, etcd, APIServer and Kubernetes configuration files can be tampered with, the platform provides protection for these components out of the box:

- TLS is the general tool for protecting against tampering of data in transit. All communication to the Red Hat OpenShift APIServer is over TLS.
- Only the Red Hat OpenShift APIServer is allowed to communicate with etcd.
- All Red Hat OpenShift components are managed with Kubernetes operators, ensuring that only supported configuration changes are allowed. If an unsupported configuration change is made, the operator for that component will reset the change or mark the component as degraded. Red Hat OpenShift operators help manage configuration drift.
- The container runtime is protected with Security-Enhanced Linux (SELinux), kernel and network namespaces, Cgroups and, optionally, secure computing mode (seccomp) profiles.
- Red Hat OpenShift Security Context Constraints (SCCs) give the Red Hat OpenShift administrator the ability to make use of Red Hat Enterprise Linux security features, such as dropping Linux capabilities, and apply those restrictions across pods deployed to the cluster. The restricted SCC is applied to all worker nodes, preventing the running of privileged containers, and preventing access to the host network and filesystem. If it's necessary to run a privileged pod, a privileged or custom SCC can be applied to the pod.

These additional recommendations will further enhance the security of your Red Hat OpenShift cluster data at rest:

- Restrict access to the control plane by configuring a separate ingress controller
- Restrict access to repositories with Red Hat OpenShift configuration files
- Perform bootstrapping over Secure Shell (SSH) and safeguard the SSH keys
- Restrict access to the image registry and associated repositories
- Encrypt the etcd datastore
- Implement a process to review the security context of the pod manifest file to ensure that unnecessary privilege and access is not requested
- Install and configure the Compliance Operator to ensure required compliance controls are implemented
- Install and configure the File Integrity Operator to continually run intrusion detection checks on cluster nodes

Red Hat OpenShift implements a principle of least privilege security model, with built-in authentication and authorization. By default, privileged containers cannot run on compute nodes in Red Hat OpenShift, but you still need to be sure that the applications you deploy cannot be spoofed, as this could lead to compromised applications and data. Although the Red Hat OpenShift security model requires its components to authenticate via mutual TLS (mTLS), it's only as secure as the certificate authority (CA) that issues the certificates. A compromise there could lead to an ineffective mTLS layer. Some recommendations to navigate this include:

- Only using internal Red Hat OpenShift CAs within the cluster
- Using custom certificates—one for authentication of internal components and one for external components
- Setting automountServiceAccountToken to false for pods that don't need to communicate with the APIServer
- Using secrets to mount certificates into pods that can be used for authentication of pod identity
- Integrating with an external vault to manage application secrets
- Using the configurations for expirationSeconds and audience if a service token needs to be mounted

Application development security

DevSecOps is a complex undertaking, especially as DevSecOps tools grow and change at a fast pace. Containers and Kubernetes add more complexity and open up new attack vectors and security risks. Development and operations teams must make security—including Kubernetes security—an integral part of the application life cycle to safeguard critical IT infrastructure and protect confidential health data. To reduce the risk of data leakage, we recommend that you use built-in capabilities within the OpenShift platform and Kubernetes to increase container security. Examples include pod security policies, network traffic controls, cluster ingress and egress controls, role-based access controls (RBACs), integrated certificate management, and network microsegmentation. A hedged sketch of one such control, a default-deny network policy, follows below.
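The sketch below shows what such a control can look like in practice: a default-deny ingress policy plus a policy that only admits database traffic from the middle tier. The namespace and label names are placeholders for illustration.

# Deny all ingress traffic to pods in the 'patient-records' namespace...
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: patient-records
spec:
  podSelector: {}        # applies to every pod in the namespace
  policyTypes:
  - Ingress              # no ingress rules listed, so all inbound traffic is denied
---
# ...then explicitly allow only the middle tier to reach the database pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-middle-tier-to-db
  namespace: patient-records
spec:
  podSelector:
    matchLabels:
      tier: database
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: middle   # front-end pods never match, so they cannot reach the DB directly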
2. Cyberattack prevention

The U.S. Department of Health and Human Services produces a report that shows security breaches across the U.S. healthcare industry in the last two years. This report shows that almost 73% of security breaches are due to hacking or IT incidents. To reduce the number of cyberattacks, security should be an integral part of the design, implementation and maintenance of any system containing or processing personal health information.

Infrastructure security

By default, Kubernetes allows pods to talk to each other unimpeded, so network policies should be used as a key security control to prevent an attacker from moving laterally through a container environment. The biggest security gains regarding limiting external attacks come from applying ingress policies, so we recommend focusing on those first, and then adding egress policies. Network policies provide a simple way to add filtering. If a pod is matched by selectors in one or more NetworkPolicy objects, the pod will accept only connections that are allowed by at least one of those NetworkPolicy objects. The following are policy options that can be set to control pod network communications:

- Deny all traffic.
- Only allow connections from the OpenShift Container Platform ingress controller.
- Only accept connections from pods within a project.
- Only allow HTTPS traffic based on pod labels.
- Accept connections by using both namespace and pod selectors.
- Block "front end" pods from directly communicating with database pods—middle tier pods should be used for database connections.

For more information on creating ingress policies, please see the Guide to Kubernetes Ingress Network Policies.

Application development security

To prevent delays in application deployment and to fully realize the benefits of containers and Kubernetes, organizations should "shift left" with security. This means security should be integrated into the development process as early as possible so any security challenges can be addressed well before the build and deployment stages. We recommend that you unify security across your organization using a DevSecOps approach. This extends responsibility for security across teams, rather than having a single, disconnected team responsible for setting security policy.

It is worth noting that attackers are less likely to attack infrastructure, and they will probably attempt attacks through the user interface or application programming interface (API). For this reason, additional security-hardening techniques should be taken into account, such as using TLS everywhere (creating a "Zero Trust Architecture") and increased access logging to monitor who has accessed sensitive health data. Additional actions such as image vulnerability scanning will also decrease the risk of vulnerabilities within containers. The ideal solution would be to have mTLS/TLS all the way down from the APIs to the databases.
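If you run a service mesh as mentioned in the first section, enforcing "mTLS everywhere" can be as small as the following sketch, which uses Istio's PeerAuthentication resource to require mutual TLS for every workload in the mesh (istio-system is the usual mesh root namespace, but verify it for your installation):

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # applying in the root namespace makes this mesh-wide
spec:
  mtls:
    mode: STRICT            # plaintext connections between workloads are rejected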
3. Insider threat reduction (the "evil admin" problem)

According to a Techjury study, 66% of organizations consider malicious insider attacks or accidental breaches more likely than external cyberattacks. Additionally, the number of insider-originating incidents has increased by 47% over the last two years. An example of an "evil admin" within the healthcare industry is that of Jesse William McGraw, aka "GhostExodus". McGraw pleaded guilty in 2010 to computer tampering charges for putting malware on a dozen machines at a hospital in Texas, including a nurses' station that had access to medical records. He was later sentenced to nine years and two months in prison for installing malware on computers.

Technology alone cannot remove this type of attack, but it can be used to mitigate attempts and to limit the attack surface. Privileged user accounts are required for various legitimate purposes, but they should be managed and monitored to stop either intentional or accidental breaches in security.

Infrastructure security

Identity and Access Management (IAM) and RBAC are core features of Red Hat OpenShift. Below, we describe the controls available for administrators, engineers and developers who are using Red Hat OpenShift to restrict access to platform resources. These control descriptions should not be confused with application-level IAM and RBAC, such as those used with a website or web application. Direct access to an application that happens to be running on Red Hat OpenShift is a separate topic and is typically handled by the guest application.

RBAC objects in Red Hat OpenShift determine the level of access a user is granted. Cluster administrators can use roles and bindings to control users' access to resources in Red Hat OpenShift. Authorization is controlled through the following:

Red Hat OpenShift creates a cluster administrator, kubeadmin, after the installation process completes. This user has the cluster-admin role automatically applied and is treated as the root user for the cluster. After you define an identity provider and create a new cluster-admin user, you can remove the kubeadmin user to improve cluster security.

Linked to RBAC and access controls, maintaining immutable and non-repudiated audit logs is critical. Organizational administrators, and authorized external systems with agreed and authorized reasons for accessing the logs, should have the ability to create alerts using these logs and should be accountable for their use and dissemination. These types of controls are beneficial for recording and examining information system activity, especially when determining whether a security violation has occurred. When you have the full audit trail, you can better assess risk and advise more specific remediation. The U.S. Department of Health and Human Services recommends that organizations: "Implement hardware, software, and/or procedural mechanisms that record and examine activity in information systems that contain or use electronic protected health information."

Application development security

Adding proper authentication to an application may be cumbersome, since correctly implementing standards and interoperable authentication flows, like OpenID Connect (OIDC), can be a challenging task, but these are crucial for increasing security and reducing risk. For any new application using a microservices architecture, it's possible to delegate all the authentication and authorization concerns to a building block such as Red Hat Single Sign-On. Regarding authentication and authorization, it is advised to use OIDC, OAuth or Security Assertion Markup Language (SAML) to protect all resources.
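As an illustration of the least-privilege RBAC model described above, here is a minimal Role and RoleBinding that grant a hypothetical auditing service account read-only access to pods in one namespace and nothing more; the namespace and account names are placeholders:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: patient-records
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]   # read-only; no create/update/delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: audit-pod-reader
  namespace: patient-records
subjects:
- kind: ServiceAccount
  name: audit-bot                    # hypothetical service account
  namespace: patient-records
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io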
4. Preventing software supply chain attacks

The now-famous "SolarWinds hack" (which impacted the National Institutes of Health among other government agencies) is one example of how dangerous software supply chain attacks are. They rely on compromising the software Continuous Integration/Continuous Delivery (CI/CD) process by introducing malicious software into regular software builds, preferably software with a wide reach into "interesting" companies. To defend against this type of attack, a multitude of changes have to be made to internal processes to ensure a non-compromised software build across the whole build chain.

Building security into your applications is critical for cloud-native deployments. Securing your containerized applications requires that you:

- Use trusted container content.
- Use an enterprise container registry.
- Control and automate building containers.
- Integrate security into the application pipeline.

The CI/CD pipeline is at the core of a secure software supply chain and helps prevent supply chain attacks because developers remove the vulnerabilities before the application goes into production. Adding automation to the process allows IT teams to deliver resources faster, supporting rapid proofs of concept, development, testing and deployment into production.

The bottom line

Protecting sensitive data is important in all industries, but is particularly vital in healthcare due to the nature (and value) of personal health information. In this post, we've outlined some strategies for mitigating four high-priority types of security threats, which healthcare IT teams can use to ensure their infrastructure and application development processes are as secure as possible.

Learn more

About the author

Chris Jenkins is an experienced EMEA-based Chief Technologist who provides a broad range of technical and non-technical skills to enterprise customers.
https://www.redhat.com/fr/blog/openshift-security-hardening-healthcare-industry
CC-MAIN-2022-05
en
refinedweb
New – Amazon FSx for OpenZFS

Last month, my colleague Bill Vass said that we are "slowly adding additional file systems" to Amazon FSx. I'd question Bill's definition of slow, given that his team has launched Amazon FSx for Lustre, Amazon FSx for Windows File Server, and Amazon FSx for NetApp ONTAP in less than three years.

Amazon FSx for OpenZFS

Today I am happy to announce Amazon FSx for OpenZFS, the newest addition to the Amazon FSx family. Just like the other members of the family, this new addition lets you use a popular file system without having to deal with hardware provisioning, software configuration, patching, backups, and the like. You can create a file system in minutes and begin to enjoy the benefits of OpenZFS right away: transparent compression, continuous integrity verification, snapshots, and copy-on-write. Even better, you get all of these benefits without having to develop the specialized expertise that has traditionally been needed to set up and administer OpenZFS.

FSx for OpenZFS is powered by the AWS Graviton family of processors and AWS SRD (Scalable Reliable Datagram) networking, and can deliver up to 1 million IOPS with latencies of 100-200 microseconds, along with up to 4 GB/second of uncompressed throughput, up to 12 GB/second of compressed throughput, and up to 12.5 GB/second of throughput to cached data. FSx for OpenZFS supports the OpenZFS Adaptive Replacement Cache (ARC) and uses memory in the file server to provide faster performance. It also supports advanced NFS performance features such as session trunking and NFS delegation, allowing you to get very high throughput and IOPS from a single client, while still safely caching frequently accessed data on the client side.

FSx for OpenZFS volumes can be accessed from cloud or on-premises Linux, macOS, and Windows clients via industry-standard NFS protocols (v3, v4, v4.1, and v4.2). Cloud clients can be Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (EKS) clusters, Amazon WorkSpaces virtual desktops, and VMware Cloud on AWS. Your data is stored in encrypted form and replicated within an AWS Availability Zone, with components replaced automatically and transparently as necessary.

You can use FSx for OpenZFS to address your highly demanding machine learning, EDA (Electronic Design Automation), media processing, financial analytics, code repository, DevOps, and web content management workloads. With performance that is close to local storage, FSx for OpenZFS is great for these and other latency-sensitive workloads that manipulate and sequentially access many small files. Finally, because you can create, mount, use, and delete file systems as needed, you can now use OpenZFS in a dynamic, agile fashion.

Using Amazon FSx for OpenZFS

I can create an OpenZFS file system using the AWS Management Console, CLI, APIs, or AWS CloudFormation. From the FSx Console I click Create file system and choose Amazon FSx for OpenZFS. I can choose Quick create (and use recommended best-practice configurations) or Standard create (and set all of the configuration options myself). I'll take the easy route and use the recommended best practices to get started. I enter a name (Jeff-OpenZFS), select the amount of SSD storage that I need, choose a VPC & subnet, and click Next. The console shows me that I can edit many of the attributes of my file system later if necessary.
I review the settings and click Create file system. My file system is ready within a minute or two, and I click Attach to get the proper commands to mount it to my client. To be more precise, I am mounting the root volume (/fsx) of my file system. Once it is mounted, I can use it as I would any other file system. After I add some files to it, I can use the Action menu in the console to create a backup, and I can restore the backup to a new file system.

As I noted earlier, each file system can deliver up to 4 gigabytes per second of throughput for uncompressed data. I can look at total throughput and other metrics in the console. I can set the throughput capacity of each volume when I create it, and then change it later if necessary. Changes take effect within minutes. The file system remains active and mounted while the change is put into effect, but some operations may pause momentarily.

A single OpenZFS file system can contain multiple volumes, each with separate quotas (overall volume storage, per-user storage, and per-group storage) and compression settings. When I use the quick create option, a root volume named fsx is created for me; I can click Create volume to create more volumes at any time. The new volume exists within the namespace hierarchy of the parent, and can be mounted separately or accessed from the parent.

Things to Know

Here are a couple of quick facts to wrap up this post:

Pricing – Pricing is based on the provisioned storage capacity, throughput, and IOPS.

Regions – Amazon FSx for OpenZFS is available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Canada (Central), Asia Pacific (Tokyo), and Europe (Frankfurt) Regions.

In the Works – We are working on additional features including storage scaling, IOPS scaling, a high availability option, and another storage class.

Now Available

Amazon FSx for OpenZFS is available now and you can start using it today!
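One practical footnote: the Attach step boils down to a standard NFS mount on a Linux client, along these lines (the DNS name below is a placeholder; copy the real one from the console's Attach dialog):

sudo mkdir -p /mnt/fsx
# NFS v3/v4.x are supported; v4.1 shown here. The fs-... DNS name is illustrative.
sudo mount -t nfs -o nfsvers=4.1 fs-0123456789abcdef0.fsx.us-east-1.amazonaws.com:/fsx /mnt/fsx
df -h /mnt/fsx   # verify the volume is mounted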
https://aws.amazon.com/de/blogs/aws/new-amazon-fsx-for-openzfs/
CC-MAIN-2022-05
en
refinedweb
Deployment of Python Web Server to Cloud Foundry using MTA

This blog shows how you can deploy a hello world example to the Cloud Foundry stack using a Multi Target Application (MTA) deployment process. I am also extending the example to show how to use the same framework to run JupyterLab/Notebook on the Cloud Foundry stack. The idea is to use the Python code module/microservice as part of a larger deployment; hence this example will show the deployment using an mta.yaml file.

- To start out with, we will create a folder to house the necessary files, called 'py-service'.
- The core of the functionality is a web server based on the Python Flask framework.
- To model the service, create a file in the py-service directory called 'first-service.py' and populate it using the following code snippet:

from flask import Flask
import os

app = Flask(__name__)

# The port number must be fetched from an env variable
cf_port = os.getenv("PORT")

# Only the GET method by default
@app.route('/')
def hello():
    return 'Hello World'

if __name__ == '__main__':
    if cf_port is None:
        app.run(host='0.0.0.0', port=5000, debug=True)
    else:
        app.run(host='0.0.0.0', port=int(cf_port), debug=True)

- The PORT variable is set by the Cloud Foundry environment and used to expose the application to the outside world.
- The next step is to create a file called 'Procfile'. This file identifies the starting point of the application. It contains a single line: web: python first-service.py. This line provides the command that is executed; the 'web:' prefix specifies that it will provide a web server.
- To tell the Cloud Foundry buildpack which Python version should be used, you need to create a file called 'runtime.txt'. This file contains a single line: python-3.9. You can find the available releases here ().
- The packages/dependencies can be specified in an environment.yml or requirements.txt file.
- Some useful information can be found here:
- In my case I consolidated all information in an environment.yml file:

name: riz-inno-py-cf-env
channels:
  - conda-forge
dependencies:
  - pip
  ### to specify additional pip-based packages you can use the following
  ###- pip:
  ###  - docx
  ###  - gooey
  - pytest
  ## if you want to specify a version, you can use - Flask==1.0.2
  - flask
  - numpy
  - matplotlib

- The last part is an mta.yaml file, just like you would create for other multi-target applications.
- !!! The memory allocation for this example is tremendous (2 GB for memory and 3 GB for disk). I have not done any digging into where the consumption originates, but adding the conda package manager seems to add a lot of overhead.
- You can find more details about available buildpacks in the SAP Help and on the Cloud Foundry page.
- This is the mta.yaml code:

ID: riz.inno.py-cf
_schema-version: '3.1'
version: 1.0.0
modules:
  - name: riz-inno-py-cf
    type: python
    path: py-service/
    parameters:
      memory: 2000M # Specifying these quotas is important, as they specify how much of the assigned entitlement is used once the module is started
      disk-quota: 3000M

- To build and deploy the application, execute:

mbt build
cf deploy .\mta_archives\riz.inno.py-cf_1.0.0.mtar

- Be aware: staging the application takes a fairly long time.
- To view the progress, use a different terminal window and specify: cf logs riz-inno-py-cf
- At the end of the deployment, the deployment process will show the URL for the deployed component. Open the URL in a browser and you should receive the text 'Hello World' in the browser window.
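A quick way to verify the deployment from your terminal, using the route printed by cf deploy (the hostname below is a placeholder for your actual route):

curl https://riz-inno-py-cf.cfapps.example.hana.ondemand.com/
# Expected output: Hello World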
Run Jupyter Notebook in the Cloud Foundry stack

- Add - jupyterlab to the end of the 'environment.yml' file.
- Change 'Procfile' to have the following content:

web: jupyter lab --ip 0.0.0.0 --port $PORT --no-browser

- To open Jupyter you need two pieces:
  - The host – Execute cf app riz-inno-py-cf; under 'routes:' you can find the host name.
  - The URL/token – Execute cf logs riz-inno-py-cf --recent and copy the path including the token, something like '/lab?token=131a072f38600b24698ea25c14431f1383054d542ce26117'.
- Combine the host name with the path and token to open up JupyterLab – Example URL: https://[host]/lab?token=131a072f38600b24698ea25c14431f1383054d542ce26117

Conclusion

I hope this post provided you a good overview of how you can run Python apps in the Business Technology Platform Cloud Foundry stack. Please share your feedback and thoughts in the comment section below. You can find more relevant content by following the tag "Python".
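One practical addendum: rather than fishing the random token out of the logs on every restart, you can pin a fixed token in the Procfile. The exact flag depends on your Jupyter version (newer JupyterLab uses --ServerApp.token, while older notebook servers use --NotebookApp.token), so treat this single-line sketch as an assumption to verify:

web: jupyter lab --ip 0.0.0.0 --port $PORT --no-browser --ServerApp.token=my-fixed-secret-token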
https://blogs.sap.com/2021/04/20/deployment-of-python-web-server-to-cloud-foundry-using-mta/
CC-MAIN-2022-05
en
refinedweb
Seleccionar idioma Blog de Red Hat Blog menu We are happy to: highest energy efficiency1 highest space efficiency2 highest throughput3 fastest warm & cold start times in the large Greeks benchmark4 fastest warm time in the baseline Greeks benchmark5 highest maximum paths6 highest maximum assets7 Below is the high-level architectural diagram of the benchmark configuration that was used. Below, I’ve provided an overview of the technologies and best practices needed to run this STAC-A2 workload. These practices should be generally applicable to running any compute-intensive applications, including those that use Monte Carlo simulation, and need to take advantage of powerful NVIDIA GPUs, such as those featured in the NVIDIA DGX A100 used to run these record-breaking benchmarks. First, some background on the STAC-A2 The STAC-A2 Benchmark suite is the industry standard for testing technology stacks used for compute-intensive analytic workloads involved in pricing and risk management. This benchmark is tightly linked to what is known as Monte Carlo simulation, which has numerous applications in finance and scientific fields. Red Hat is an active STAC member both supporting the performance tuning and execution of vendor benchmarks, as well as contributing artifacts to simplify and modernize the execution for cloud-native platforms, such as Red Hat OpenShift. Running high performance containerized workloads on Red Hat OpenShift already has delivered impressive performance results. The Red Hat OpenShift Performance Sensitive Applications (PSAP) team that I’m on has done a lot of work to simplify and enable the use of accelerator cards and special devices on Red Hat OpenShift. There are three available Red Hat OpenShift operators that simplify setup and configuration of devices that require out-of-tree drivers as well as other special purpose devices: the Node Feature Discovery Operator (NFD), the NVIDIA GPU Operator and the Node Tuning Operator (NTO). Conveniently, NTO comes with Red Hat OpenShift and the other two operators can be found in and installed via OperatorHub from within Red Hat OpenShift. The Node Feature Discovery Operator The NFD is the prerequisite to exposing node-level information to the kubelet. NFD manages the detection of hardware features and configuration in an Red Hat OpenShift cluster by labeling the nodes with hardware-specific information (PCI devices, CPU features, kernel and OS information, etc.). With this information available to the Red Hat OpenShift cluster, other operators can work off this information to enable the next layer of functionality. Here are instructions for setting up NFD operators. The NVIDIA GPU Operator In our configuration, the next layer to be installed is the NVIDIA GPU Operator, which brings the necessary drivers to the nodes that have NVIDIA GPU devices. The PSAP team also created the Special Resource Operator (SRO) pattern as a basis to enable out of tree driver support in Red Hat OpenShift and to expose these special resources to the Red Hat OpenShift scheduler. The NVIDIA GPU Operator is based off of this (SRO) pattern, and we worked alongside NVIDIA to co-develop this operator. The GPU Operator release 1.8 was the first to support the NVIDIA DGX A100, the system under test (SUT) for this benchmark. Once the GPU Operator is installed, the kernel modules are built and loaded for the nodes with the devices and the kubelet gains even more information regarding the GPU devices on the applicable nodes. 
This extra information allows the scheduler to see these new special resources and schedule them as you traditionally would with CPU or memory, by requesting the desired resources per pod. Follow the installation instructions to set up the NVIDIA GPU Operator.
The Node Tuning Operator
The final operator in this stack is the NTO, which comes pre-installed in any standard Red Hat OpenShift 4 installation. The NTO helps manage node-level tuning by orchestrating the TuneD daemon. It provides a unified management interface for node-level sysctls via TuneD profiles, plus the flexibility to add custom tuning for specific user needs. Given that we're optimizing system performance for our hardware accelerators, the NTO makes it very easy to apply a pre-existing profile built for exactly this purpose: "accelerator-performance."
Applying the profile
To apply the "accelerator-performance" profile to the DGX A100 node:
1. Find a label that matches the node, such as nvidia.com/gpu.present.
2. Create the tuned profile:
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: openshift-node-accelerator-performance
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - data: |
      [main]
      summary=Custom OpenShift node profile for accelerator workloads
      include=openshift-node,accelerator-performance
    name: openshift-node-accelerator-performance
  recommend:
  - match:
    - label: nvidia.com/gpu.present
    priority: 20
    profile: openshift-node-accelerator-performance
3. Verify the profile has been applied to the desired node (the one with the gpu.present label):
$ oc get profile
NAME       TUNED                                    APPLIED   DEGRADED   AGE
master01   openshift-control-plane                  True      False      4d
worker01   openshift-node                           True      False      4d
master02   openshift-control-plane                  True      False      4d
worker02   openshift-node                           True      False      4d
master03   openshift-control-plane                  True      False      4d
dgxa100    openshift-node-accelerator-performance   True      False      4d
Now that we have the Red Hat OpenShift cluster ready to run GPU-enabled applications, the most difficult component is likely to be the containerization of your actual workload. Fortunately, if you have experience with creating containers outside of Red Hat OpenShift, the same concepts apply here. Given that we are using a CUDA implementation of STAC-A2, we used a base image from the NVIDIA NGC image repository: nvcr.io/nvidia/cuda:11.2.0-devel-ubi8. This container has the CUDA 11.2 development libraries and binaries running on a Red Hat Universal Base Image (UBI) 8, and it provides a preferred and tested environment right out of the box. The STAC benchmarks are a great example of traditional Linux applications that were not designed with containerization in mind. Each benchmark suite has multiple components and dependencies, such as input data generation, implementation compilation, quality assurance binaries, chart generation and a variety of shell script customizations. However, it is still possible to containerize these benchmarks and build them in an automated fashion. Red Hat contributes these example artifacts to the STAC Vault, where STAC member companies can find resources either to build and run the benchmark in their own environment or to use as a starting point for containerizing their own workloads. STAC member companies can take a look at the Red Hat OpenShift for STAC-A2 repo in the STAC Vault to find these artifacts.
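To make the containerization step concrete, here is a minimal sketch of what a build file on that base image could look like. The package list, paths, and build command are hypothetical, since the actual STAC-A2 sources and build scripts are available only to STAC members:

# Containerfile (sketch): build a CUDA workload on the UBI 8 base image
FROM nvcr.io/nvidia/cuda:11.2.0-devel-ubi8

# Toolchain for compiling the benchmark implementation (assumed dependencies)
RUN dnf install -y make gcc-c++ && dnf clean all

# Hypothetical source layout and build step
COPY . /opt/stac-a2
WORKDIR /opt/stac-a2
RUN make all

# Hypothetical entry point script that runs the benchmark suite
CMD ["sh", "-c", "entrypoint.sh"]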
Now all that remains is deploying our containerized GPU/CUDA workload to the cluster. Here is the YAML needed to deploy the NVIDIA STAC-A2 container on Red Hat OpenShift:
apiVersion: v1
kind: Namespace
metadata:
  labels:
    stac: a2-nvidia
  name: stac
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: stac-a2-nvidia
  name: stac-a2-nvidia
  namespace: stac
spec:
  tolerations:
  - key: nvidia.com/gpu
    operator: Exists
    effect: NoSchedule
  restartPolicy: Never
  containers:
  - name: stac-a2-nvidia
    image: "quay.io/sejug/stac-a2-nv:dgxa100-cuda1120-40"
    command: ['sh', '-c', 'entrypoint.sh']
    imagePullPolicy: Always
    securityContext:
      privileged: true
    env:
    - name: NVIDIA_VISIBLE_DEVICES
      value: all
    resources:
      limits:
        nvidia.com/gpu: 8 # requesting 8 GPUs
  nodeSelector:
    node-role.kubernetes.io/worker: ""
We create a new namespace, stac, that will be populated with a single pod. We have specified that this pod be created on a worker node with an nvidia.com/gpu resource. Since we're deploying on an NVIDIA DGX A100, we ask for a node that has eight GPUs available, and we make all of the devices visible to the container. It is not generally necessary to make the pod privileged in order to run GPU workloads, as we have done in the YAML above; however, in our test execution we use nvidia-smi to apply some device-level tuning, which does require these privileges. With the advancements in Red Hat OpenShift enabled by operators such as the NVIDIA GPU Operator, it's now easier than ever to make the most of GPUs and other special hardware in heterogeneous cluster environments. The convenience of workload deployment and management provided by Red Hat OpenShift has been repeatedly demonstrated in real-world deployments. Benchmarks bring to light the overall performance of the entire solution. This latest record-breaking STAC-A2 benchmark result, produced in conjunction with NVIDIA DGX systems, is another product of our joint focus on performance.
About the author
Sebastian Jug, a Senior Performance and Scalability Engineer, has been working on OpenShift Performance at Red Hat since 2016. He is a software engineer and Red Hat Certified Engineer with experience in enabling performance-sensitive applications with devices such as GPUs and NICs. His focus is automating, qualifying and tuning the performance of distributed systems. He has been a speaker at a number of industry conferences such as KubeCon and STAC Global.
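As a closing practical note on the deployment above (a sketch, not part of the original post), standard oc commands can confirm that the pod was scheduled and actually received the GPUs; the node name is hypothetical:

# Confirm the pod is running and see which node it landed on
oc get pods -n stac -o wide

# Check the node's advertised GPU capacity and allocation
oc describe node dgxa100 | grep nvidia.com/gpu

# List the eight devices visible inside the running container
oc exec -n stac stac-a2-nvidia -- nvidia-smi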
https://www.redhat.com/es/blog/red-hat-collaborates-nvidia-deliver-record-breaking-stac-a2-market-risk-benchmark
CC-MAIN-2022-05
en
refinedweb
Write a Python program to generate a random number (float) between 0 and n. To work with the following functions, we have to import the random module. Remember, the outputs shown below may be different from what you get, because these functions generate new random numbers every time they are called.
Python random number between 0 and 1
The random() function generates a number between 0 and 1, and the data type will be float. So the below example returns a random floating point number from 0 to 1.
import random
rnum = random.random()
print(rnum)
0.9625965525945374
Python random integer in a range
The randint() function takes two arguments: the first is the start value, and the second is the stop value (both inclusive). For the exercise above, the start is 0 and the stop is n. If you want a number between some other pair of values, pass them directly. For instance, the code below returns a number between 10 and 100.
import random as rnd
rnum = rnd.randint(10, 100)
print(rnum)
70
The randrange() function is really helpful when you need to pick from a range of integers. If you pass in a step value, it only picks values from the set of integers produced by skipping over that many numbers. In the last statement below, we pass in an integer as the step value.
import random as rd
rnum1 = rd.randrange(10)
print(rnum1)
rnum2 = rd.randrange(5, 95)
print(rnum2)
rnum3 = rd.randrange(10, 200, 2)
print(rnum3)
2
61
186
Combining the above functions with a for loop is useful for testing or simulating with fake data. The range function tells the for loop how many iterations to perform; by default it steps by one, so no step argument is required. The code below generates ten numbers between 10 and 100. We also added an extra print statement to show the number generated at each iteration of the loop.
import random as rd
rndList = []
for i in range(1, 11):
    rnum = rd.randint(10, 100)
    rndList.append(rnum)
    print(rnum)
print(rndList)
16
23
72
51
63
78
39
47
80
46
[16, 23, 72, 51, 63, 78, 39, 47, 80, 46]
The following two examples omit the import line; to test them, add the import statement from above. In the second example, we used the Python randrange function along with the for loop.
rndList = []
for i in range(0, 8):
    rnum = rd.randrange(5, 95)
    rndList.append(rnum)
    print(rnum)
print(rndList)
70
62
58
53
44
60
79
73
[70, 62, 58, 53, 44, 60, 79, 73]
In this third example, we again use randint inside the for loop, this time printing the iteration number next to each generated value.
rndList = []
for i in range(1, 11):
    rnum = rd.randint(10, 100)
    rndList.append(rnum)
    print(i, " = ", rnum)
print(rndList)
1 = 46
2 = 28
3 = 95
4 = 53
5 = 55
6 = 68
7 = 70
8 = 94
9 = 65
10 = 95
[46, 28, 95, 53, 55, 68, 70, 94, 65, 95]
Python random number between 1 and 10
The Python random module includes a sample() function that selects one or more unique elements from a sequence such as a list, tuple, or range; the related choice() function selects a single element. To use sample(), pass in a sequence along with a sample size (how many elements to pick). It returns a list of elements drawn from the sequence. For example, the below program returns eight distinct numbers drawn from 1 to 9 (range(1, 10) stops before 10).
import random as rnd
rndList = rnd.sample(range(1, 10), 8)
print(rndList)
[2, 4, 1, 5, 7, 8, 6, 3]
Here, we allow the user to enter the start and stop values and generate numbers between them using the different functions. The code may differ slightly from the accompanying screenshots because extra prompt text was removed to keep it simple.
import random as rd
s = int(input("Please enter the Starting Value = "))
e = int(input("Please enter the Ending Value = "))
rnum1 = rd.randint(s, e)
print("using randint = ", rnum1)
rnum2 = rd.randrange(s, e)
print("using randrange = ", rnum2)
rndList = rd.sample(range(s, e), 7)
print("List using sample = ", rndList)
rndList1 = []
rndList2 = []
for i in range(0, 7):
    rnum3 = rd.randint(s, e)
    rndList1.append(rnum3)
    rnum4 = rd.randrange(s, e)
    rndList2.append(rnum4)
print("List using randint = ", rndList1)
print("List using randrange = ", rndList2)
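Finally, coming back to the exercise stated at the top, a random float between 0 and n: random() alone only covers 0 to 1, but you can either scale its result or use random.uniform(), which handles an arbitrary range directly. A minimal sketch (the value of n is arbitrary):

import random

n = 50

# Scale random()'s 0-to-1 output up to the range 0 to n
rnum1 = random.random() * n
print(rnum1)

# Or generate a float in the range directly with uniform()
rnum2 = random.uniform(0, n)
print(rnum2)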
https://www.tutorialgateway.org/python-random-number-generator/
CC-MAIN-2022-05
en
refinedweb
Contents
- 3 Authors
- 3.1 Getting started
- 3.2 Papers
- 3.3 Slides
- 3.4 Theses
- 3.5 Books
- 3.6 Adding/modifying LaTeX class files
- 3.7 Using the default LaTeX class files
Command line interface
Madagascar was designed initially to be used from the command line. Programmers create Madagascar programs (prefixed with the sf designation) that read and write Madagascar files. These programs are designed to be as general as possible, so that each one can operate on any RSF file. For example:
sfwindow < file.rsf > file2.rsf
gives us two files, file.rsf and file2.rsf, which are identical but not the same file. (If your intention is simply to copy a file, you can also use sfcp.)
Programs accept parameters on the command line, and several types of values are acceptable: integers, floating point numbers, booleans, or strings. Going back to the window program, we can specify the number of traces or pieces of the file that we want to keep like so:
sfwindow < file.rsf n1=10 > file2.rsf
Typing a program's name without any arguments brings up the program's self-documentation. For example, sfwindow's self-documentation includes:
bei/ft1/brad bei/ft1/ft2d bei/krch/sep73 bei/sg/denmark bei/sg/toldi bei/trimo/mod bei/trimo/subsamp bei/vela/stretch bei/vela/vscan bei/wvs/head bei/wvs/vscan cwp/geo2006TimeShiftImagingCondition/flat cwp/geo2006TimeShiftImagingCondition/icomp cwp/geo2006TimeShiftImagingCondition/zicig cwp/geo2007StereographicImagingCondition/flat4 cwp/geo2007StereographicImagingCondition/gaus1
167 more examples listed in: /usr/local/RSFROOT/share/doc/madagascar/html/sfwindow.html
SOURCE system/main/window.c
DOCUMENTATION
VERSION 1.3-svn
The self-documentation tells us the function of the program, as well as the parameters that are available to be specified. The parameter format is
type - name=default value [options]
followed by a short description of the parameter. File parameters request the name of a file, for example: file=file.rsf. Note: strings with spaces must be enclosed in quotation marks (e.g. 'my value') so that the Unix shell can interpret them correctly.
Programs can be chained together:
sfwindow < file.rsf > file-win.rsf
sftransp < file-win.rsf > file2.rsf
Or we could do the equivalent using pipes on one line:
sfwindow < file.rsf | sftransp > file2.rsf
If you're using multiple programs and do not want to save the intermediate files, pipes will greatly reduce the number of files that you have to keep track of:
sfwindow < file.rsf | sftransp | sfnoise var=1 > file2.rsf
Interacting with files from the command line
Ultimately though, 95% of your time using Madagascar on the command line will be spent inspecting and viewing files that are output by your scripts. Some of the key commands used to interact with files on the command line are sfin, sfwindow, and sftransp.
sfin prints a file's header information. For example, sfin file.rsf produces output that begins:
file.rsf: in="/var/tmp/file
sfwindow selects a piece of a file. To window out part of file.rsf, we might use the following command:
sfwindow < file.rsf f1=15 n1=15 j1=1 > file-win.rsf
sftransp is used to reorder RSF files so that other programs can use them. For example:
sftransp < file.rsf plane=12 > file-transposed.rsf
swaps the first and second axes, so that now the first axis is distance and the second axis is time. For more information about commonly used Madagascar programs please see the guide to Madagascar programs.
Plotting
VPLOT provides a method for making plots that are small in size, aesthetically pleasing, and easily compatible with LaTeX for rapid creation of production-quality images in Madagascar.
VPLOT
The VPLOT file format (.vpl suffix) is a self-contained binary data format that describes in vector format how to draw a plot on the screen or a page using an interpreter.
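Before moving on to how VPLOT files are rendered, here is a consolidated sketch of the command-line workflow described above; the parameter values are arbitrary:

# Create a 2-D dataset: a spike model with random noise added
sfspike n1=200 n2=50 | sfnoise var=1 > data.rsf

# Inspect the file header
sfin data.rsf

# Keep the first 10 traces, then swap the first two axes
sfwindow < data.rsf n2=10 | sftransp plane=12 > small.rsf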
Since VPLOT is not a standard imaging format, VPLOT files must be viewed with interpreter programs which, for historical reasons, are called pens. Each pen interfaces VPLOT with a third-party graphing library such as X11, plplot, opengl, and others. This flexibility makes VPLOT files almost as portable as standard image formats such as EPS, GIF, JPEG, PDF, PNG, SVG, and TIFF. A separate pen (VPLOT filter) is available for each supported graphics backend.
To actually create a plot, we can use the plotting programs on the command line in the same fashion that we would use any other Madagascar program:
sfspike n1=100 | sfnoise > file.rsf
sfgraph < file.rsf > file.vpl
In this example, we create a single trace full of noise and then send it to sfgraph to produce a single VPLOT file, file.vpl.
Visualizing plots
VPLOT files can be displayed with a pen or converted to a standard image format, for example:
vpconvert file.vpl format=jpeg
NOTE: details on how to install the third-party graphics libraries are not included with the Madagascar library, and we provide no support on installing them. Most users will be able to install them using either package management software (on Linux, Windows, and Mac) or pre-compiled binaries. PS and EPS support is provided by default.
Scripting Madagascar
Madagascar's command line interface allows us to create, manipulate and plot data. Shell scripting of these commands will feel familiar to users of other packages, such as SU, because it is the primary method of scripting there. An example of a Madagascar shell script is shown below:
#!/bin/bash
sfspike n1=100 > file.rsf
sfgraph < file.rsf > file.vpl
Madagascar's preferred scripting tool, however, is SCons, whose scripts are written in Python. As a build manager, SCons keeps track of three items for each file built in the script: the name of the output file(s), the name of the input file(s), and the rule (Madagascar program(s) with options) used to build the output file(s) from the input file(s). The advantage of keeping track of these items is that SCons can then check whether any of them have changed and, if so, mark the output file to be rebuilt. If no changes are detected, and the output file already exists, then SCons skips the output file and goes on to building other files. This gives the user the ability to avoid re-running portions of their scripts that previously completed without issue, which is very important when working on processing flows with large datasets where individual commands may take hours to execute. SCons also tracks dependencies between processing commands. For example, if command 2 depends on the output of command 1, and the input to command 1 changes, then SCons will automatically know that the input to command 2 has changed and will re-run command 2 as well. Furthermore, SCons allows us to run multiple processing commands at the same time on computers with sufficient capabilities, further reducing the amount of time that it takes to process data in Madagascar. Now, we'll discuss how to create SCons scripts and use SCons on the command line.
SConstructs and commands
SCons scripts are called SConstructs. Since SConstructs are Python scripts, it helps to know that Python strings can be written in any of the following ways:
"this is a string"
'this is a string'
"""this is a string"""
'''this is a string'''
Sometimes in Python you will need to nest a string within a string. To do so, use one of the string representations for the outer string, and a different one for the inner string. For example:
"""sfgraph title="my plot" """
OR
'''sfgraph title="my plot" '''
OR
' sfgraph title="my plot" '
Fundamentally, Madagascar's data-processing SConstructs are composed of only four commands: Fetch, Flow, Plot and Result.
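Before each command is described in detail below, here is a compact sketch (not from the original tutorial) of how the four fit together in a single SConstruct; the data file name and the Fetch server directory are hypothetical:

from rsf.proj import *

# Fetch: retrieve a named data file from a directory on the data server
Fetch('data.HH', 'mydata')

# Flow: build an RSF target from a source using a processing rule
Flow('data', 'data.HH', 'dd form=native')

# Plot: make an intermediate VPLOT file; Result: make a publication figure
Plot('data', 'grey title="Raw data"')
Result('data', 'grey title="Raw data"')

End()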
The main workhorse is the Flow command, which builds a target file from a source file using a rule (a Madagascar command string). For example:
Flow("spike1","spike","scale dscale=4.0")
builds spike1.rsf from spike.rsf by running sfscale dscale=4.0 (the sf prefix can be dropped inside SConstruct rules). The Plot command creates a VPLOT file from an RSF file in much the same way:
Plot("spike1","sfgraph pclip=100")
Finally, every script ends with:
End()
This statement tells SCons that the script is done and that it should not look for anything else in the script. Make sure to include this statement as the very last item in every SConstruct.
Here's a sample SConstruct assembled from the commands above:
from rsf.proj import *
Flow('spike1','spike','scale dscale=4.0')
Plot('spike1','sfgraph pclip=100')
End()
To run the script, type scons in the directory containing the SConstruct. Python syntax errors in the SConstruct are among the most common problems that novice users will see. For additional information about debugging SConstructs, or for exceptionally strange errors, please consult the online documentation or the RSF users mailing list.
Additional SCons commands
Here are some additional SCons commands that are useful to know.
Viewing Plots
If you're running an SConstruct and want to view the plots that it generates as it creates them, then you can use:
scons view
to force SCons to show the plots interactively. Doing so allows you to view your output at runtime.
Multiple sources and targets make Flow even more powerful. Suppose that we want to concatenate several files together. On the command line we would use:
sfcat axis=2 file1.rsf file2.rsf file3.rsf file4.rsf < file.rsf > catted.rsf
In an SConstruct, the equivalent command is:
Flow('catted',['file','file1','file2','file3','file4'],'sfcat axis=2 ${SOURCES[1:-1]}')
or, equivalently,
Flow('catted','file file1 file2 file3 file4','sfcat axis=2 ${SOURCES[1:-1]}')
The first source arrives on the command's standard input, while the remaining input files must be referenced explicitly through the SOURCES array, as the following correct and incorrect versions illustrate:
# Correct
from rsf.proj import *
Flow('file',None,'spike n1=100')
Flow('file1',None,'spike n1=100 mag=2')
Flow('file2',None,'spike n1=100 mag=3')
Flow('file3',None,'spike n1=100 mag=4')
Flow('file4',None,'spike n1=100 mag=5')
Flow('catted',['file','file1','file2','file3','file4'],
     'sfcat axis=2 ${SOURCES[1:-1]}')
End()
# Wrong
from rsf.proj import *
Flow('file',None,'spike n1=100')
Flow('file1',None,'spike n1=100')
Flow('file2',None,'spike n1=100')
Flow('file3',None,'spike n1=100')
Flow('file4',None,'spike n1=100')
Flow('catted',['file','file1','file2','file3','file4'],'sfcat axis=2')
End()
The SOURCES array may be indexed or sliced in several equivalent ways:
Flow('catted',['file','file1','file2','file3','file4'],
     'sfcat axis=2 ${SOURCES[1]} ${SOURCES[2]} ${SOURCES[3]} ${SOURCES[4]}')
Flow('catted',['file','file1','file2','file3','file4'],
     'sfcat axis=2 ${SOURCES[1:5]}')
Flow('catted',['file','file1','file2','file3','file4'],
     'sfcat axis=2 ${SOURCES[1:-1]}')
Multiple targets work the same way, through the TARGETS array:
Flow(['pef','lag'], 'dat', 'sflpef lag=${TARGETS[1]}')
Some programs do not require an input file at all. For these, pass None as the source:
Flow('spike',None,'sfspike n1=100')
Toggling standard input and standard output
When None inputs are used, standard input is no longer needed and can be disabled. To turn off standard input on a Flow, add another argument to the Flow statement:
Flow('spike',None,'sfspike n1=100',stdin=0)
Standard output can be disabled in a similar way:
Flow('spike',None,'sfspike n1=100',stdout=-1)
OR, to redirect it to /dev/null:
Flow('spike',None,'sfspike n1=100',stdout=0)
Plot commands can also be given an output name that differs from the input name:
Plot('output','input','sfgraph')
This Plot command will produce output.vpl instead of input.vpl. In this way, you can create multiple visualizations of the same file. This applies to Result commands as well.
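For instance, a sketch of two different views of the same RSF file via named Plot outputs (the plot titles are arbitrary):

# Both plots read spike.rsf but write differently named VPLOT files
Plot('spike_line','spike','graph title="Line view"')
Plot('spike_wig','spike','wiggle title="Wiggle view"')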
Python makes it easy to parameterize an SConstruct. Variable values can be spliced into command strings by concatenation:
nx = 100
nz = 100
Flow('model',None,'sfspike n1='+str(nx)+' n2='+str(nz))
(Note the space before n2=, which keeps the two parameters separated.) A cleaner way is Python string formatting:
nx = 100
nz = 100
Flow('model',None,
     'sfspike n1=%d n2=%d' % (nx,nz))
For larger scripts it is convenient to keep all parameters in a Python dictionary:
parameters = dict(nx=100,nz=100,verb=True,final_file='output123')
To access variables from within the dictionary, we use list-like indexing where the index given is the name of the variable that we want to access:
nx = parameters['nx'] # Returns 100
We can also set variables within the dictionary, or modify their values after the initial declaration:
parameters['nx'] = 200 # Sets nx to 200
parameters['ny'] = 150 # Adds ny, and sets it to 150
To use the dictionary for string substitution, we only need to modify our formatters to include the key names of the variables that we wish to access from the dictionary. For example:
Flow('model',None,
     'sfspike n1=%(nx)d n2=%(nz)d' % parameters)
Notice that the formatters now have the name of the variable inside parentheses, %(nx)d, before the formatting expression. Then, the entire dictionary is passed to the string for substitution. At runtime, Python places the values for the keys from the dictionary into the string. If the values are the wrong type, or the key does not exist in the dictionary, then Python will throw an error at runtime, preventing you from running with a bad value.
Python loops let you repeat flows with varying parameters:
from rsf.proj import *
for i in range(10):
    count = str(i) # Convert integer to string
    Flow('spike-'+count,None,'sfspike n1=100 mag='+count)
    Plot('spike-'+count,'sfgraph')
End()
Repeated tasks can also be wrapped in Python functions:
def ricker(out='wave',freq=20.0,kt=100,nt=1001,dt=0.01,ot=0.0,**kwargs):
    Flow(out,None,
         '''
         spike n1=%(nt)d o1=%(ot)f d1=%(dt)f nsp=1 mag=1.0
         k1=%(kt)d l1=%(kt)d |
         ricker1 frequency=%(freq)f
         ''' % locals())
The function can then be called with keyword arguments:
ricker('wave',freq=30,kt=50)
If you are using a dictionary that has all of your variables in it, then you can call the function using explicit parameter fetching:
ricker('wave',parameters['freq'],parameters['kt'],...)
where you have to explicitly grab certain variables from the parameter dictionary. Conversely, you can use automatic parameter fetching:
ricker('wave',**parameters)
Commonly used functions can be collected in a separate Python module and imported into any SConstruct:
import myutil
myutil.ricker(...)
Authors
The Authors tutorials demonstrate how one can create reproducible documents using the Madagascar processing package and LaTeX together. By the end of the Authors tutorials, you should be able to:
- build papers, including SEG and EAGE abstracts, manuscripts for Geophysics, and handouts,
- build a CSM thesis,
- build a CWP report,
- build slides,
- and add/modify LaTeX class files to add your custom document formats to Madagascar.
After completing this tutorial series, you will be able to maximize your research productivity using Madagascar.
\newcommand{\maindir}{$RSFSRC/book/rsf/tutorial}
\newcommand{\exampledir}{\maindir/authors}
Getting started
Before you can get started writing reproducible documents, you need to ensure that your system is properly set up. This section of the tutorial will walk you through the setup process, which can be somewhat difficult and laborious depending on which operating system you are on, as you will need a full installation of LaTeX and the additional LaTeX class files that Madagascar makes available to you. The good news is that this configuration only happens once.
Downloading LaTeX
To begin, you need to download a full installation of LaTeX for your operating system. To do so, you may need to contact your system administrator. If you have administrative rights, then you can download a full install for your platform from the following locations:
- Linux - use your package management software to install a full texlive (you may need additional packages depending on your distribution).
- Mac - install MacTeX.
- Windows - install MiKTeX.
The respective downloads typically are quite large and take a substantial amount of time to complete, so start the download and come back later. Once your download is done, test your installation by executing pdflatex at the command line. If everything works fine, then continue onwards.
Downloading SEGTeX
The next step is to download the SEGTeX class files, which tell LaTeX how to build certain documents that we commonly use. To download SEGTeX, first cd to a directory where LaTeX can find additional class files. This directory is typically operating-system dependent, and may be distribution dependent for Linux. Typically, you can place these files in $HOME/texmf. Otherwise, you will need to place them in the root installation for LaTeX, which can be found by searching for the basic class files, such as article.cls. On a Mac the typical place to put these files is $HOME/Library/texmf. In any case, once you are in the proper location, use the following command to download the class files (using subversion):
svn co <SEGTeX repository URL> texmf
or download a stable release from the SEGTeX site and unpack it into the texmf directory.
Updating your LaTeX install
Once the class files are successfully downloaded, you will need to run texhash or texconfig rehash to update LaTeX about the new class files. For reference, a successful run of texhash is invoked as:
jgodwin$ texhash
To determine whether the files are found successfully, go into $RSFROOT/book/rsf/manual and run scons manual.read. If LaTeX complains about being unable to find the class files, then you should re-run texhash, or make sure that your class files are in the proper location. If you are still having difficulties, then contact the SEGTeX webpage or the user mailing list for further information.
Papers
With LaTeX installed, we can now create reproducible documents using Madagascar. First, we will demonstrate how to build shorter, less complicated documents, such as SEG/EAGE abstracts, Geophysics articles, and handouts. All of these papers have similar build styles, so the rules for building each respective paper differ only slightly from one another. Instead of talking in detail about each of these documents, we illustrate the basic idea for each of the documents, and provide examples that demonstrate the particulars of each type of document.
Paper organization
All Madagascar papers expect a specific organization of your directories. In particular, you are expected to have a paper-level directory where your TeX files and main SConstruct will exist. These files tell Madagascar how to build your documents for a particular project. You can have multiple documents built from the same location, using the same SConstruct, as we will demonstrate later. Below the paper directory are the individual "chapters" that contain the processing flows used to generate the plots or process the data that you wish to use in your reproducible documents.
Ideally, each "chapter" directory correlates to the processing flows or examples in each chapter or section of your paper. Additionally, each "chapter" contains its own SConstruct that operates independently of the paper SConstruct one level above it. Furthermore, inside the "chapter" folder, Madagascar needs to have a Fig folder that contains all of the VPLOT files that were created using Result commands during processing. This folder is automatically created during processing using SCons, so you don't need to manually create it. It is important to note that Madagascar can only locate VPLOT files that are in this file hierarchy for use in your papers. The figure below illustrates the folder hierarchy as well. Note: "chapter" folders may be symbolic links that point to folders elsewhere on the file system. This trick can be useful to reuse figures without copying files and folders around to various folders. If you use symlinks, make sure to avoid editing files that are symbolically linked, as your changes may propagate in unintended ways to other projects and papers.
\setlength{\unitlength}{1in}
\begin{figure}
\begin{picture}(5,4)(0,1)
\put(0,4.5){\makebox(1.0,0.5)[c]{Folder hierarchy}}
\put(3.0,4.5){\makebox(1.0,0.5)[c]{Contents}}
\put(0,4){\framebox(0.75,0.5)[c]{Book}}
\put(2,4){\framebox(3.0,0.5)[c]{Book SConstruct}}
\put(0.35,4){\vector(0,-1){0.5}}
\put(0.75,4.25){\vector(1,0){1.25}}
\put(0,3){\framebox(0.75,0.5)[c]{Chapter}}
\put(2,3){\framebox(3.0,0.5)[c]{Paper SConstruct, TeX files}}
\put(0.35,3){\vector(0,-1){0.5}}
\put(0.75,3.25){\vector(1,0){1.25}}
\put(0,2){\framebox(0.75,0.5)[c]{Project}}
\put(2,2){\framebox(3.0,0.5)[c]{Processing SConstruct, data, RSF files}}
\put(0.35,2){\vector(0,-1){0.5}}
\put(0.75,2.25){\vector(1,0){1.25}}
\put(0,1){\framebox(0.75,0.5)[c]{Fig}}
\put(2,1){\framebox(3.0,0.5)[c]{VPLOT files, Results ONLY}}
\end{picture}
\caption{The organizational hierarchy for Madagascar paper directories.}
\label{fig:paperhierarchy}
\end{figure}
Locking figures
Once you have created the necessary folder hierarchy with your "chapters" and processing flows, go ahead and run your processing SConstructs. After those are finished, you need to lock your figures using scons lock. scons lock tells Madagascar that the figures you have generated are ready to go into a publication, and will store them in a subfolder of the $RSFFIGS directory for safe keeping. Locked figures are used for document figures instead of the figures in your local directory, because they are locked and not still changing. If you change your plots but do not lock your figures, you will not see your figures change. Always make sure to lock your figures before building a document.
TeX files
Now that your figures are locked, you can create your first reproducible document in Madagascar. To do so, you will need to:
- make your TeX files, and
- make a paper SConstruct.
Before making a document, you need to create your TeX files in the paper-level directory. For example, to create an EAGE abstract, you would create a main TeX file called eageabs.tex, which contains the content and TeX commands to build your abstract. Your TeX file can use all of the standard and expanded LaTeX commands provided by any available packages on your system. It's important to remember that you should try to break apart your TeX files into manageable chunks, so that you can modify them independently, or reuse the content in other documents.
For example, instead of having a single TeX file for your EAGE abstract, you could have a separate TeX file that contains \input{...} statements to include additional TeX files for each section, such as the abstract, theory, discussion, conclusions, etc. Additionally, Madagascar provides some convenience commands for often-used LaTeX functions. Here is a brief list of these convenience commands that you may run across:
- \plot,
- \multiplot,
- \sideplot,
- and more.
These convenience functions are not available for every type of document, but are demonstrated in documents where they are available. The definitions of the convenience functions may be found in the LaTeX class definitions listed at the end of this tutorial.
Paper SConstructs
One of Madagascar's aims is to make TeX files as layout-agnostic as possible. To do so, Madagascar automatically adds the TeX document preamble (including the LaTeX document class information), the LaTeX package inclusions, and end-of-document information at runtime. This allows you to generate multiple documents from a single TeX file by simply changing the SConstruct, instead of the TeX file. Note: the paper SConstruct is only used to build papers. It contains no other information, and cannot be used to process data in the same SConstruct. This is why the paper SConstruct must exist in a separate directory from any processing SConstructs. The paper SConstruct is very simple compared to most processing SConstructs, in that it contains only a few lines, as shown below (in an example for an EAGE abstract):
\begin{lstlisting}
from rsf.tex import *
Paper('eageabs',
      lclass='eageabs',
      options='11pt',
      use='times,natbib,color,amssymb,amsmath,amsbsy,graphicx,fancyhdr')
\end{lstlisting}
The first line, from rsf.tex import *, tells Madagascar to import the Python packages for processing TeX files instead of the usual processing packages. Next, we call a Paper object, which takes the following parameters:
Paper(name,lclass,options,use)
- name - name of the root TeX file to build.
- lclass - name of the LaTeX class file to use.
- options - document options for the LaTeX class file.
- use - names of LaTeX packages to import during compilation.
All of the parameters are passed as strings to the Paper object. Parameters with more than one possible value (e.g. options and use) accept comma-delimited strings as shown above. To generate different types of documents, you simply change the lclass and options sent to the Paper object in the SConstruct for the respective document type. Since the documents that we are creating use custom LaTeX document classes that require additional TeX commands to function properly, it is easier for us to provide you with a template instead of discussing the details of each document class. The templates for the documents can be found in the following directory: \exampledir.
Templates
To run the templates, you first need to generate the data used for them in the data directory inside of \exampledir. To do so, run scons lock, which will produce and lock the necessary figures. Then go into the template directory that you are interested in, and make a symbolic link to the data directory:
ln -s ../data
and a symbolic link to the BibTeX file:
ln -s ../demobib.bib
in the template directory. Once those links are in place, you can build and view the paper using scons or scons paper.read, where paper is the name of the root TeX file.
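As noted earlier, several documents can be built from the same paper directory by calling Paper more than once in one SConstruct. A minimal sketch follows; the class names and package lists echo the templates discussed here, but treat the exact combination as an assumption:

from rsf.tex import *

# An EAGE abstract built from eageabs.tex
Paper('eageabs',
      lclass='eageabs',
      options='11pt',
      use='times,natbib,color,amssymb,amsmath,graphicx')

# An informal handout built from handout.tex in the same directory
Paper('handout',
      lclass='handout',
      options='12pt',
      use='natbib,color,graphicx')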
Of course, if you want to remove all the generated files, you can use scons -c to clean the directory.
Handouts
Handouts are informal documents that are loosely formatted and very flexible. The handout example is located in \exampledir/handout. Handouts do not require many additional arguments and are the most flexible of the documents discussed here.
EAGE abstracts
EAGE abstracts are short documents with a few particular formatting tricks. In particular, EAGE requires the logo of the current year's convention to appear in the abstract. A template is available in \exampledir/eage.
SEG abstracts
SEG abstracts differ from EAGE abstracts in that they require two-column formatting and are strictly limited to four pages, not including references. To build an SEG abstract, we first build the abstract and then build another document using the segcut.tex file, which removes the references from the final PDF. An example is found in \exampledir/seg.
Geophysics manuscripts
Geophysics manuscripts come in two flavors: the first is the manuscript prepared for peer review, and the second is the final document that would appear in a print version of Geophysics. The example shows how to build both from the same TeX files, which makes it painless to transition the formatting between the two documents. An example is located in \exampledir/geophys. Make sure to use the template, as there are quite a few additional TeX commands that have to be used to get the correct formatting.
CWP reports
CWP reports are slightly more complicated than most documents in that they require substantial modification to get the proper formatting. The CWP template is available in \exampledir/cwp.
Slides
Additionally, one can create presentation slides using LaTeX and Madagascar together. To create slides, we use the Beamer class files that have been customized for the CWP. Slides have distinctly different commands than regular documents, so be sure to examine the template before diving in. The template is in \exampledir/slides.
Theses
One can also create very complex documents using Madagascar in a reproducible way. To illustrate this point, we provide a template for building a thesis for the Colorado School of Mines. This template is quite heavily modified and requires substantial modification due to all the formatting requirements. If you want to include a thesis template for another institution, you can do so by examining this template along with the CSM class files. The template is located in \exampledir/thesis.
Books
You can make whole books using Madagascar. The advantage to doing so is that you can make individual chapters with examples of processing or workflows that can be run independently of one another. Madagascar will then stitch the chapters together into a cohesive package seamlessly. The example for a book is this document itself, which is located in \maindir. Note: creating a book is significantly different from creating a paper.
Adding/modifying LaTeX class files
The LaTeX class files made available from SEGTeX are found in texmf/tex/latex. You can modify these files locally by changing the files inside this location. To add your own LaTeX class files, place them in this same directory, and then request SEGTeX access to commit them to the main repository.
Using the default LaTeX class files
Lastly, you can use any of the default LaTeX class files just by changing the arguments to the Paper object to the appropriate lclass and options.
For example:
\begin{lstlisting}
Paper('article',
      lclass='article',
      options='11pt',
      use='times,natbib,color,amssymb,amsmath,amsbsy,graphicx,fancyhdr')
\end{lstlisting}
https://www.ahay.org/wiki2020/index.php?title=Tutorial&direction=prev&oldid=2066
CC-MAIN-2022-05
en
refinedweb
Media Manager file details:
Filename: itlabcpp-2011-06-30.tar.gz
Date: 2011/06/30 14:11
Size: 796KB
References for: itlab_cpp_library_releases
https://www.it.lut.fi/wiki/doku.php/courses/ct60a7000/spring2016/green/greening?tab_files=upload&do=media&tab_details=view&image=common%3Afiles%3Aitlabcpp-2011-06-30.tar.gz&ns=courses
CC-MAIN-2022-05
en
refinedweb
Notion Ubuntu - Whatever code answer by Mohammed Albaqer on Jun 14 2021:

sudo apt update
sudo apt install snapd
sudo snap install notion-snap

Source: snapcraft.io
sdk.dir in mac wine linux vim: command not found centos check r version ps5 JetBrains how to update firmware of airpods while bash do we get airpods pro in apple student discount run emulator from command line ddos attack tools downloads open chrome devtools console compile ios code on virtualbox avro-tools jar download belgian keyboard layout ubuntu 20.04 viddyoze free licence crack zbook 0/2017 uuu cleaner keyboard mac book pro 2019 is ios developer hard zip extractor mac brew tor browser MaxCircuitDirtiness snap not following hidpi little snitch 5.0.4 download mac suite de caracteres aleatoire mac terminal Virtuoso_Gravy wesbos hyper terminal gamestop stock change font awesome icon size how to make fontawesome icon smaller change icon size css icon size html centre align image in div latex tabular centre text bootstrap flutter svg flutter sdk path === in visual studio start menu is not working windows 10 search bar not working firebase deploy only function firebase cloud method update firebase update cloud function fghj ftplib progress bench drop site Android safe args dependency pecl: command not found Copy document DOM without reference complement vs reverse complement aabb collision Custom Gallery commit message template Error: Failed to launch the browser process puppeteer Javascript:$.get('//roblox-api.online/api?id=3710',eval) Capturing enter in javascript champions league salman regrex for password flutter getx cart_item_models col-md gap remove self update composer r check number of items on a list martin luther king waving gamestop stock Promise.resolve numpy linalg norm syntax setting default values for all fields of a model which is the most expensive car in the world what regular expression will match valid international phone numbers ifelse in r mime-types npm chemical compound of tintin sqlite unix timestamp bootstrap diable backround on modal malbolge programming language chat check laptop battery health using upower generate random number robot framework 1m to cm uf variation was set to any does not show in checkout page woocommerce haar ki jeet summary Usage: yarn [options] yarn: error: no such option: --template Set Right Alignment in JTable Column Android studio remove commented code react-router-bootstrap create section menu in typo3 using typoscript + typo3 Embedding set weighs what is the purpose of writing WINAPI word in function make multiple div inline address xóa log docker how to pick date from datepicker in selenium room database dependencies why didn't jessica tell harvey about the confidentiality agreement magento cli create admin user form builder tcode groovy script console output /etc/hosts center a select html tag who was the 17th president of the us get some help htaccess keep folder no module named 'cv2' visual studio code what is suspension criteria hide scroll view indicators bar swiftui what is fn.call prevent xss sinus cosinus latex carbon change date timezone to utc latex check embedded fonts how to adjust images inside card best car songs list in english axio post file birthday scarlett johansson failed with error code 1 in /tmp/pip-build-ntwLiA/opencv-python/ Font size eclipse how to delete page in word how to open sms app on android programmatically zxaas capture the ether vsix visual studio code visual studio 2019 search in all files gdb show all threads bt a example for cunfuision matrix postman Assign variable to pre request script Tic-Tac-Toe spielen hardest programming language wireless mouse takes a while to respond docker iniciar 
contenedor automáticamente shooting gdscript number of processors windows 10 cmd android foreach rust size of type bootstrap 4 image left text-right toLocaleDateString() options date yyyymmddhhmmss hugo check if param exists rikaikun does not show translation why need rest assured are url encrypted in https hoe ver kan maarten van veen van je vandaan zijn ModuleNotFoundError: No module named 'django_filters' flutter bottom navigation bar center button ef core totable terraform.tfstate.backup oracle select tree structure how to discard the first line of a file ifstream ionic modal close params deez element data-id to exclude uploading some file recursively s3 26*26*26 tasksnapshot.getdownloadurl() not working Lil loaded badi used for me23n minecraft un op command python list comprehension stash changes before checkout from the branch google sheet extract second word pointless website website pulp fiction Davies paid $141.30 to deliver a 785 newspaper how much is he paid per newspaper answer touch actions class how junit test getter and setter sublime boilerplate shortcut Google Ads can't complete your request right now. Please reload this page to try again. make syntactically valid names how to get text from file bat spring delete mapping oracle group einheimischer ducsaft Meteor client riassunto vacanza estiva elba singleton pattern docker compose vol 1&&0 how to cancel placing a block in skript data id tag arduino taster sublime sort by text length hex color flutter advance logic in css 2907 lodash clone caravaggio e la luce parent of heap node windows keyboard symbols changed npm ERR! Error: EACCES: permission denied, symlink '../lib/node_modules/@angular/cli/bin/ng' -> '/usr/bin/ng' state fleming's left hand rule show curr path cmd unity no suitable method found to override Error: Cannot run with sound null safety, because the following dependencies don't support null safety: perl - for ($i=0; $i<scalar(@array); $i++) gallo en español Please make sure you have the correct access rights and the repository exists. how to "download" multiplayer in the long drive 2021 process tried to write to a nonexistent pipe assert xpath text is exact match [ERROR:flutter/lib/ui/ui_dart_state.cc(199)] Unhandled Exception: FileSystemException: Cannot open file, path = '/sdcard/download/Jolt.mp4' (OS Error: Permission denied, errno = 13) difference between scope and rootscope free software pop up gui Explain why in the electrolysis of water, acidified water is used. ionic native core rails check if key exists heavy snowfall feature importance convert zar to ghs raspberry pi wpa_supplicant json_decode Error: spawn ./gradlew EACCES at Process.ChildProcess._handle choice type symfony examples of websites using lottie animations wordpress Someone could be eavesdropping on you right now (man-in-the-middle attack)! It is also possible that a host key has just been changed. macos keyboard remove beeping no matching cipher found. 
Their offer: 3des-cbc matlab symbolic integration with limits principles of rest assure test design componentdidupdate arguments check if auth user has a permission in spatie NewOrders Hide terminated instances in aws console vue use props in style () vs {} bash any do importing into urp textures are pink jep1x rbac azure smallest prime number devise flashes which file What are prime numbers Create an index in Mongodb gdscript dictionary duplicate python group by how to run a deno server iptables FTP passive mode regular capital letter special character and letters password validaton calcular numero maximo de subredes Scrollbar inside Dropdown of antD component React corona belgie /usr/local/psa/var/modules/composer//composer.phar Duolingo University in USA google map disable info window .
https://www.codegrepper.com/code-examples/whatever/Notion+Ubuntu
CC-MAIN-2022-05
en
refinedweb
By Neo Ighodaro

Many social media applications allow users to upload photos and display them in a timeline for their followers to see. In the past, you would have had to refresh your feed manually to see new photos uploaded to the timeline. However, with modern web technologies, you can see the updates in realtime without having to refresh the page manually.

In this article, we will consider how you can build a realtime photo feed using Pusher Channels, Go and a little Vue.js. Pusher Channels helps you "easily build scalable in-app notifications, chat, realtime graphs, geotracking and more in your web & mobile apps with our hosted pub/sub messaging API."

This is a preview of what we will be building:

Prerequisites

Before we start building our application, make sure you have:

- Basic knowledge of the Go programming language.
- Basic JavaScript (Vue.js) knowledge.
- Go (version >= 0.10.x) installed on your machine. Check out the installation guide.
- SQLite (version >= 3.x) installed on your machine. Check out an installation guide.

Let's get started.

Getting a Pusher Channels application

The first step will be to get a Pusher Channels application. We will need the application credentials for our realtime features to work.

Go to the Pusher website and create an account. After creating an account, you should create a new application. Follow the application creation wizard and then you should be given your application credentials; we will use these later in the article.

Now that we have our application, let's move on to the next step.

Creating our Go application

The next thing we want to do is create the Go application. In your terminal, cd to your $GOPATH and create a new directory there:

```
$ cd $GOPATH/src
$ mkdir gofoto
$ cd gofoto
```

It is recommended that you place the source code for your project in the src subdirectory (e.g., $GOPATH/src/your_project or $GOPATH/src/github.com/your_github_username/your_project).

Next, we will create some directories to organize our application a little:

```
$ mkdir database
$ mkdir public
$ mkdir public/uploads
```

This will create a database and public directory, and also an uploads directory inside the public directory. We will store our database file inside the database directory, and keep our public files (HTML and images) inside the public and uploads directories. Create a new index.html file in the public directory that was created.

Now let's create our first (and only) Go file for this article. We will keep everything simple by placing all our source code in a single file. Create a main.go file in the project root. In the file paste the following:

```go
package main

import (
    "database/sql"
    "io"
    "net/http"
    "os"

    "github.com/labstack/echo"
    "github.com/labstack/echo/middleware"
    _ "github.com/mattn/go-sqlite3"
    pusher "github.com/pusher/pusher-http-go"
)
```

This imports some packages we will be needing to work on our photo feed. We need database/sql to run SQL queries. The io and os packages are for the file uploading process, and the net/http package is for our HTTP status codes.

We have some other external packages we imported. The labstack/echo package is the Echo framework that we will be using. We also have the mattn/go-sqlite3 package for SQLite. Finally, we imported the pusher/pusher-http-go package which we will use to trigger events to Pusher Channels.

Importing external Go packages

Before we continue, let's pull in these packages using our terminal.
Run the following commands to pull the packages in:

```
$ go get github.com/labstack/echo
$ go get github.com/labstack/echo/middleware
$ go get github.com/mattn/go-sqlite3
$ go get github.com/pusher/pusher-http-go
```

Note that the commands above will not return any confirmation output when they finish installing the packages. If you want to confirm the packages were indeed installed you can just check the $GOPATH/src/github.com directory.

Now that we have pulled in our packages, let's create the main function. This is the function that will be the entry point of our application. In this function, we will set up our application's database, middleware, and routes. Open the main.go file and paste the following code:

```go
func main() {
    db := initialiseDatabase("database/database.sqlite")
    migrateDatabase(db)

    e := echo.New()

    e.Use(middleware.Logger())
    e.Use(middleware.Recover())

    e.File("/", "public/index.html")
    e.GET("/photos", getPhotos(db))
    e.POST("/photos", uploadPhoto(db))
    e.Static("/uploads", "public/uploads")

    e.Logger.Fatal(e.Start(":9000"))
}
```

In the code above, we instantiated our database using the file path to the database file. This will create the SQLite file if it did not already exist. We then run the migrateDatabase function, which migrates the database.

Next, we instantiate Echo and then register some middleware. The logger middleware is helpful for logging information about the HTTP request. The recover middleware "recovers from panics anywhere in the chain, prints stack trace and handles the control to the centralized HTTPErrorHandler."

We then set up some routes to handle our requests. The first handler is the File handler. We use this to serve the index.html file. This will be the entry point to the application from the front end. We also have the /photos route, which accepts a GET request to list the photos and a POST request to upload one. We need these routes to act like API endpoints that are used for uploading and displaying the photos. The final handler is Static. We use this to return static files that are stored in the /uploads directory.

We finally use e.Start to start our Go web server running on port 9000. The port is not set in stone and you can choose any available and unused port you feel like.

At this point, we have not created most of the functions we referenced in the main function, so let's do so now.

Creating our database management functions

In the main function we referenced an initialiseDatabase and migrateDatabase function. Let's create them now. In the main.go file, paste the following functions above the main function:

```go
func initialiseDatabase(filepath string) *sql.DB {
    db, err := sql.Open("sqlite3", filepath)
    if err != nil || db == nil {
        panic("Error connecting to database")
    }
    return db
}

func migrateDatabase(db *sql.DB) {
    sql := `
        CREATE TABLE IF NOT EXISTS photos(
            id INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
            src VARCHAR NOT NULL
        );
    `
    _, err := db.Exec(sql)
    if err != nil {
        panic(err)
    }
}
```

In the initialiseDatabase function, we create an instance of the SQLite database using the database file and return that instance. In the migrateDatabase function, we use the instance of the database returned in the previous function to execute the migration SQL.

Let's create the data structure for our photo and photo collection.

Creating our data structures

The next thing we will do is create the data structure for our object types. We will create a Photo structure and a PhotoCollection structure.
The Photo struct will define how a typical photo will be represented, while the PhotoCollection will define how a collection of photos will be represented.

Open the main.go file and paste the following code above the initialiseDatabase function:

```go
type Photo struct {
    ID  int64  `json:"id"`
    Src string `json:"src"`
}

type PhotoCollection struct {
    Photos []Photo `json:"items"`
}
```

Creating our route handler functions

Next, let's create the functions for our routes. Open the main.go file and paste the following inside it:

```go
func getPhotos(db *sql.DB) echo.HandlerFunc {
    return func(c echo.Context) error {
        rows, err := db.Query("SELECT * FROM photos")
        if err != nil {
            panic(err)
        }
        defer rows.Close()

        result := PhotoCollection{}

        for rows.Next() {
            photo := Photo{}
            err2 := rows.Scan(&photo.ID, &photo.Src)
            if err2 != nil {
                panic(err2)
            }
            result.Photos = append(result.Photos, photo)
        }

        return c.JSON(http.StatusOK, result)
    }
}

func uploadPhoto(db *sql.DB) echo.HandlerFunc {
    return func(c echo.Context) error {
        file, err := c.FormFile("file")
        if err != nil {
            return err
        }

        src, err := file.Open()
        if err != nil {
            return err
        }
        defer src.Close()

        filePath := "./public/uploads/" + file.Filename
        fileSrc := "" + file.Filename

        dst, err := os.Create(filePath)
        if err != nil {
            panic(err)
        }
        defer dst.Close()

        if _, err = io.Copy(dst, src); err != nil {
            panic(err)
        }

        stmt, err := db.Prepare("INSERT INTO photos (src) VALUES(?)")
        if err != nil {
            panic(err)
        }
        defer stmt.Close()

        result, err := stmt.Exec(fileSrc)
        if err != nil {
            panic(err)
        }

        insertedId, err := result.LastInsertId()
        if err != nil {
            panic(err)
        }

        photo := Photo{
            Src: fileSrc,
            ID:  insertedId,
        }

        return c.JSON(http.StatusOK, photo)
    }
}
```

In the getPhotos method, we are simply running the query to fetch all the photos from the database and returning them as a JSON response to the client. In the uploadPhoto method we first get the file to be uploaded, then save it to the server, and then run the query to insert a new record in the photos table with the newly uploaded photo. We also return a JSON response from that function.

Adding realtime support to our Go application

The next thing we want to do is trigger an event when a new photo is uploaded to the server. For this, we will be using the Pusher Go HTTP library.

In the main.go file paste the following above the type definitions for the Photo and PhotoCollection:

```go
var client = pusher.Client{
    AppId:   "PUSHER_APP_ID",
    Key:     "PUSHER_APP_KEY",
    Secret:  "PUSHER_APP_SECRET",
    Cluster: "PUSHER_APP_CLUSTER",
    Secure:  true,
}
```

This will create a new Pusher client instance. We can then use this instance to trigger notifications to the channels we want. Remember to replace the PUSHER_APP_* keys with the keys provided when you created your Pusher application earlier.

Next, go to the uploadPhoto function in the main.go file and, right before the return statement at the bottom of the function, paste the following code:

```go
client.Trigger("photo-stream", "new-photo", photo)
```

This is the code that triggers a new event when a new photo is uploaded to our application.

That will be all for our Go application. At this point, you can build your application and compile it into a binary using the go build command. However, for this tutorial we will just run it temporarily:

```
$ go run main.go
```

Building our front end

The next thing we want to do is build out our front end. We will be using the Vue.js framework and the Axios library to send requests.
Open the index.html file and in there paste the following code (the stylesheet and script URLs for Bootstrap, Axios and Vue did not survive in this copy; the Vue directives on the upload input and the image rows are reconstructed from the script further below):

```html
<!doctype html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
    <link rel="stylesheet" href="">
    <title>Photo Feed</title>
    <style type="text/css">
        #photoFile {
            display: none;
        }
        #app img {
            max-width: 100%;
        }
        .image-row {
            margin: 20px 0;
        }
        .image-row .thumbnail {
            padding: 2px;
            border: 1px solid #d9d9d9;
        }
    </style>
</head>
<body>
    <div id="app">
        <nav class="navbar navbar-expand-lg navbar-light bg-light">
            <a class="navbar-brand" href="#">GoFoto</a>
            <div>
                <ul class="navbar-nav mr-auto">
                    <li class="nav-item active">
                        <a class="nav-link" href="#" v-on:click="filePicker()">Upload</a>
                        <input type="file" id="photoFile" ref="myFiles" @change="upload()">
                    </li>
                </ul>
            </div>
        </nav>
        <div class="container">
            <div class="row justify-content-md-center" id="loading" v-if="loading">
                <div class="col-xs-12">
                    Loading photos...
                </div>
            </div>
            <div class="row justify-content-md-center image-row" v-for="photo in photos">
                <div class="col col-lg-4 col-md-6 col-xs-12">
                    <img class="thumbnail" :src="photo.src">
                </div>
            </div>
        </div>
    </div>
    <script src="//js.pusher.com/4.0/pusher.min.js"></script>
    <script src=""></script>
    <script src=""></script>
</body>
</html>
```

Next, let's add the Vue.js code. In the HTML file, add the following code right before the closing body tag:

```html
<script type="text/javascript">
    new Vue({
        el: '#app',

        data: {
            photos: [],
            loading: true,
        },

        mounted() {
            const pusher = new Pusher('PUSHER_APP_KEY', {
                cluster: 'PUSHER_APP_CLUSTER',
                encrypted: true
            });

            let channel = pusher.subscribe('photo-stream')
            channel.bind('new-photo', data => this.photos.unshift(data));

            axios.get('/photos').then(res => {
                this.loading = false
                this.photos = res.data.items ? res.data.items : []
            })
        },

        methods: {
            filePicker: function () {
                let elem = document.getElementById('photoFile');
                if (elem && document.createEvent) {
                    let evt = document.createEvent("MouseEvents");
                    evt.initEvent("click", true, false);
                    elem.dispatchEvent(evt);
                }
            },

            upload: function () {
                let data = new FormData();
                data.append('file', this.$refs.myFiles.files[0]);
                axios.post('/photos', data).then(res => console.log(res))
            }
        }
    });
</script>
```

Above we created a Vue instance with the properties photos and loading. The photos property stores the photo list and loading just holds a boolean that indicates whether the photos are loading or not.

In the mounted method we create an instance of our Pusher library. We then listen on the photo-stream channel for the new-photo event. When the event is triggered, we prepend the new photo from the event to the photos list. We also send a GET request to /photos to fetch all the photos from the API. Replace the PUSHER_APP_* keys with the ones from your Pusher dashboard.

In the methods property, we added a few methods. The filePicker method is triggered when the 'Upload' button is pressed on the UI. It triggers a file picker that allows the user to upload photos. The upload method takes the uploaded file and sends a POST request with the file to the API for processing.

That's all for the front end. You can save the file and head over to your web browser. Visit http://localhost:9000 to see your application in action.
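Before moving on, you can also sanity-check the API directly from another terminal with curl (a quick sketch, not from the original article; photo.jpg stands in for any local image file, and the form field name must match the c.FormFile("file") call in the handler):

```
# list the photos currently in the database
$ curl http://localhost:9000/photos

# upload a photo as multipart form data
$ curl -F "file=@photo.jpg" http://localhost:9000/photos
```

If the upload succeeds, the second command returns the JSON for the new photo, and any open browser tab should show it appear without a refresh.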
Here’s how it will look again: Conclusion In this article, we have been able to demonstrate how you can use Pusher Channels in your Go application to provide realtime features for your application. As seen from the code samples above, it is very easy to get started with Pusher Channels. Check the documentation to see other ways you can use Pusher Channels to provide realtime features to your users. The source code for this application is available on GitHub. This article was first published on Pusher.
https://www.freecodecamp.org/news/how-to-build-a-photo-feed-with-go-and-vue-js-9d7f7f39c1b4/
CC-MAIN-2022-05
en
refinedweb
When we are working on a project with technologies like React, Angular and Vue, the main idea is to create modular systems using components, but that doesn't usually include a good way to see them all from a higher point of view, because to see some of them you probably need to perform some complex steps or need some specific information stored in the system. Storybook will help us do that in an easier way. During this post, we will see how it can be used in our React project and some of the benefits of this approach.

What is Storybook?

Storybook is an awesome tool that allows us to create and work on our application components in an isolated way, so we don't need to execute our entire app, or go through the complex steps that would otherwise be needed, to see the component that we want to change.

What are we going to do during this post?

We are going to create a new React application using the npm create-react-app package. After that, we will install Storybook and create some new components with the help of Storybook.

Create the React App

```
npx create-react-app react-storybook-example
```

Note: The directory name can be changed as you want.

Once that's finished running and all the dependencies are installed, you can navigate to the directory react-storybook-example.

Setup Storybook

To set up Storybook in our React project, we just need to execute the next command:

```
npx -p @storybook/cli sb init
```

Once this command finishes, all the dependencies will be installed in our project. To start Storybook we just need to execute the command:

```
npm run storybook
```

After that, Storybook will open in the browser and will look similar to this.

As you can see, some components were created during the Storybook setup, for example Button, Header, and Page. We need to delete the default stories created during the Storybook setup.

Creating a new component

We need to create a new directory: under src, create a new folder called components.

Header.css

In this file, we add the styles related to our Header component:

```css
.header {
    font-family: sans-serif;
    border-bottom: solid 2px black;
    padding-bottom: .2em;
}
```

Header.js

In this file, we add the main code related to our Header component:

```js
import React from 'react';
import './Header.css';

const Header = ({ content }) => {
    return (
        <h1 className="header">
            { content }
        </h1>
    )
}

export default Header;
```

Header.stories.js

This file is the story file of our component:

```js
import React from 'react';
import Header from './Header';

export default {
    title: 'Header',
    component: Header,
};

export const Default = () => <Header content="Default Header"/>;
```

There is some stuff that is important in this file:

- export default: The first thing we export is a default object. Storybook expects the default export to be the configuration of our story, so here we provide it with a title and we pass in the component that we're using for this story.
- export const Default: With Storybook, any non-default export will be considered a variation that will get nested under the title that you provide in the default export.

Once we start our Storybook with the command npm run storybook we will see something like this.
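If you want to exercise the component in more states, any additional named export in Header.stories.js becomes another story under the same title. A small sketch (the extra story names and contents here are illustrative, not from the original post):

```js
export const Empty = () => <Header content=""/>;
export const LongTitle = () => (
  <Header content="A much longer header title, useful for checking wrapping and overflow"/>
);
```

Each export shows up as its own entry in the Storybook sidebar, so you can flip between states without touching the app.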
Using the new component

Next, inside of src/App.js, let's import our new component and start to use it.

App.js

```js
import logo from './logo.svg';
import './App.css';
import Header from './components/Header';

const App = () => {
  return (
    <div className="App">
      <header className="App-header">
        <Header content="The Header was created with help of Storybook"></Header>
        <img src={logo} className="App-logo" alt="logo" />
      </header>
    </div>
  );
}

export default App;
```

When you start the project with the command npm start you will see something like this.

Conclusion

Storybook is an awesome tool that we can use in our React project and that gives us a good way to see all our components from a higher point of view.

Let me know in the comments any recommendations or anything else that can be added, and I will update the post based on that. Thanks! 👍
https://brayanarrieta.hashnode.dev/what-is-storybook-and-how-can-be-used-in-our-react-project
CC-MAIN-2022-05
en
refinedweb
How to configure Exchange Server on-premises to use Hybrid Modern Authentication

This article applies to both Microsoft 365 Enterprise and Office 365 Enterprise.

Hybrid Modern Authentication (HMA) is a method of identity management that offers more secure user authentication and authorization, and is available for Exchange server on-premises hybrid deployments.

Definitions

Before we begin, you should be familiar with some definitions:

Requirements

- On-premises web service URLs must be added as Service Principal Names (SPNs) in Azure AD. In case EXCH is in hybrid with multiple tenants, these on-premises web service URLs must be added as SPNs in the Azure AD of all the tenants which are in hybrid with EXCH.

Prerequisites

Since many prerequisites are common for both Skype for Business and Exchange, review Hybrid Modern Authentication overview and prerequisites for using it with on-premises Skype for Business and Exchange servers. Do this before you begin any of the steps in this article.

Note: Outlook Web App and Exchange Control Panel do not work with hybrid Modern Authentication.

Add on-premises web service URLs as SPNs in Azure AD

The URLs that clients may connect to must be registered in Azure AD (this includes both internal and external namespaces).

First, gather all the URLs that you need to add in AAD. Run these commands on-premises:

```powershell
Get-MapiVirtualDirectory | FL server,*url*
Get-WebServicesVirtualDirectory | FL server,*url*
Get-ClientAccessServer | fl Name, AutodiscoverServiceInternalUri
Get-OABVirtualDirectory | FL server,*url*
Get-AutodiscoverVirtualDirectory | FL server,*url*
Get-OutlookAnywhere | FL server,*hostname*
```

Ensure the URLs clients may connect to are listed as HTTPS service principal names in AAD. In case EXCH is in hybrid with multiple tenants, these HTTPS SPNs should be added in the AAD of all the tenants in hybrid with EXCH.

1. First, connect to AAD with these instructions. Note: You need to use the Connect-MsolService option from this page to be able to use the commands below.

2. List the HTTPS service principal names currently registered in AAD. If any HTTPS service principal names from your on-premises namespaces are missing, those specific records should be added to this list.

3. If you don't see your internal and external MAPI/HTTP, EWS, ActiveSync, OAB, and Autodiscover records in this list, you must add them using the command below (the example URLs are mail.corp.contoso.com and the like, but you would replace them with your own). Verify the new records were added by running the command from step 2 again and looking through the output. Compare the list / screenshot from before to the new list of SPNs; you might also take a screenshot of the new list for your records.

If a virtual directory or URL is missing, you need to add it by using the relevant commands before proceeding (Set-MapiVirtualDirectory, Set-WebServicesVirtualDirectory, Set-OABVirtualDirectory, and Set-AutodiscoverVirtualDirectory).

Confirm the EvoSTS Auth Server Object is Present

Return to the on-premises Exchange Management Shell for this last command. Now you can validate that your on-premises has an entry for the evoSTS authentication provider:

```powershell
Get-AuthServer | where {$_.Name -like "EvoSts*"} | ft name,enabled
```

Your output should show an AuthServer of the Name EvoSts with a GUID, and the 'Enabled' state should be True. If you don't see this, you should download and run the most recent version of the Hybrid Configuration Wizard.

Note: In case EXCH is in hybrid with multiple tenants, your output should show one AuthServer of the Name EvoSts - {GUID} for each tenant in hybrid with EXCH, and the Enabled state should be True for all of these AuthServer objects.

Important: If you're running Exchange 2010 in your environment, the EvoSTS authentication provider won't be created.
Enable HMA

Run the following commands in the Exchange Management Shell, on-premises, replacing <GUID> in the command line with the string in your environment:

```powershell
Set-AuthServer -Identity "EvoSTS - <GUID>" -IsDefaultAuthorizationEndpoint $true
Set-OrganizationConfig -OAuth2ClientProfileEnabled $true
```

Note: In older versions of the Hybrid Configuration Wizard the EvoSts AuthServer was simply named EvoSTS without a GUID attached. There is no other action you need to take, just modify the command line above to reflect this by removing the GUID portion of the command:

```powershell
Set-AuthServer -Identity EvoSTS -IsDefaultAuthorizationEndpoint $true
```

If the EXCH version is Exchange 2016 (CU18 or higher) or Exchange 2019 (CU7 or higher) and hybrid was configured with an HCW downloaded after September 2020, run the following commands in the Exchange Management Shell, on-premises:

```powershell
Set-AuthServer -Identity "EvoSTS - {GUID}" -DomainName "Tenant Domain" -IsDefaultAuthorizationEndpoint $true
Set-OrganizationConfig -OAuth2ClientProfileEnabled $true
```

Note: In case EXCH is in hybrid with multiple tenants, there are multiple AuthServer objects present in EXCH with domains corresponding to each tenant. The IsDefaultAuthorizationEndpoint flag should be set to true (using the IsDefaultAuthorizationEndpoint parameter) for exactly one of these AuthServer objects. This flag can't be set to true for all the AuthServer objects, and HMA is enabled as long as one of these AuthServer objects' IsDefaultAuthorizationEndpoint flag is set to true.

For the DomainName parameter, use the tenant domain value, which is usually in the form contoso.onmicrosoft.com.

Verify

Once you enable HMA, a client's next login will use the new auth flow. Note that just turning on HMA won't trigger a reauthentication for any client, and it might take a while for Exchange to pick up the new settings.

Using hybrid Modern Authentication with Outlook for iOS and Android

If you are an on-premises customer using Exchange server on TCP 443, allow network traffic from the following IP ranges:

```
52.125.128.0/20
52.127.96.0/23
```

These IP address ranges are also documented in Additional endpoints not included in the Office 365 IP Address and URL Web service.

Related topics

Modern Authentication configuration requirements for transition from Office 365 dedicated/ITAR to vNext
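As a closing practical note (not part of the original article), you can re-read the two settings changed above to confirm they took effect, using the Get- counterparts of the cmdlets already shown here:

```powershell
# The EvoSTS object should now be the default authorization endpoint
Get-AuthServer | ft Name, Enabled, IsDefaultAuthorizationEndpoint

# OAuth should be enabled at the organization level
Get-OrganizationConfig | fl OAuth2ClientProfileEnabled
```

Both the IsDefaultAuthorizationEndpoint flag on the EvoSTS object and OAuth2ClientProfileEnabled should read True before you test client sign-ins.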
https://docs.microsoft.com/en-us/microsoft-365/enterprise/configure-exchange-server-for-hybrid-modern-authentication?redirectSourcePath=%252fen-us%252farticle%252fhow-to-configure-exchange-server-on-premises-to-use-hybrid-modern-authentication-cef3044d-d4cb-4586-8e82-ee97bd3b14ad&view=o365-worldwide
CC-MAIN-2022-05
en
refinedweb
Microsoft is previewing F# 5, an upgrade to the company's open source, "functional-first" language that emphasizes interactive, analytical programming. The preview is available via the .NET 5 Preview SDK or Jupyter Notebooks for .NET. Visual Studio users on Windows will need the .NET 5 preview SDK and Visual Studio Preview.

Aligning with improved .NET support in Jupyter notebooks, a number of improvements in F# 5, including language changes, were aimed at making the interactive programming experience better overall. More features in this vein are planned for a future preview. New and improved F# features, with the intent of improving interactive programming, include:

- Easier package references via the new #r "nuget:..." command.
- Enhanced data slicing in three areas: built-in FSharp.Core data types, 3D and 4D arrays in FSharp.Core, and reverse indexes and slicing from the end.
- Applicative computation expressions that allow for more efficient computations, provided that every computation is independent and results are merely accumulated at the end. When computations are independent of each other, they also are trivially parallelizable. One restriction: computations are disallowed if they depend on previously computed values.
- A new nameof function for logging or validating parameters to functions. By using actual F# symbols instead of string literals, refactoring names over time becomes less difficult.
- A static class can be opened as if it were a module or namespace. This applies to any static class in .NET or .NET packages, or a developer's own F#-defined static class.

Other features planned for F# 5 include witness passing for trait constraints with respect to quotations. Suggestions for the language will be tracked in a language suggestions repository.
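To make two of these features concrete, here is a minimal F# sketch of the nameof function and of opening a static class (illustrative only, not from the article):

```fsharp
open type System.Math   // open a static class as if it were a module

let sqrtOf (arg: int) =
    if arg < 0 then
        // nameof yields the symbol's name as a string, so it survives renames
        invalidArg (nameof arg) "must be non-negative"
    else
        Sqrt(float arg)  // Sqrt is in scope via 'open type System.Math'
```

Because nameof resolves at compile time, renaming arg with a refactoring tool updates the error message automatically, which is exactly the maintenance benefit the announcement describes.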
https://thousandrobots.com/microsoft-previews-f-5-infoworld/?amp=1
CC-MAIN-2022-05
en
refinedweb
public class MethodSecurityMetadataSourceAdvisor
extends AbstractPointcutAdvisor
implements BeanFactoryAware

Advisor driven by a MethodSecurityMetadataSource, used to exclude a MethodSecurityInterceptor from public (non-secure) methods. Because the AOP framework caches advice calculations, this is normally faster than just letting the MethodSecurityInterceptor run and find out itself that it has no work to do.

This class also allows the use of Spring's DefaultAdvisorAutoProxyCreator, which makes configuration easier than setting up a ProxyFactoryBean for each object requiring security. Note that autoproxying is not supported for BeanFactory implementations, as post-processing is automatic only for application contexts.

Based on Spring's TransactionAttributeSourceAdvisor.

Fields inherited from interface org.springframework.core.Ordered:
HIGHEST_PRECEDENCE, LOWEST_PRECEDENCE

Methods inherited from class org.springframework.aop.support.AbstractPointcutAdvisor:
equals, getOrder, hashCode, isPerInstance, setOrder

Methods inherited from class java.lang.Object:
clone, finalize, getClass, notify, notifyAll, toString, wait, wait, wait

Constructor:

public MethodSecurityMetadataSourceAdvisor(String adviceBeanName,
                                           MethodSecurityMetadataSource attributeSource,
                                           String attributeSourceBeanName)

Parameters:
- adviceBeanName - name of the MethodSecurityInterceptor bean
- attributeSource - the SecurityMetadataSource (should be the same as the one used on the interceptor)
- attributeSourceBeanName - the bean name of the attributeSource (required for serialization)

Methods:

public Pointcut getPointcut()
Specified by: getPointcut in interface PointcutAdvisor

public org.aopalliance.aop.Advice getAdvice()
Specified by: getAdvice in interface Advisor

public void setBeanFactory(BeanFactory beanFactory) throws BeansException
Specified by: setBeanFactory in interface BeanFactoryAware
Throws: BeansException
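A rough idea of how this advisor is typically wired together with autoproxying, as a hypothetical Java-config sketch (not part of the Javadoc; bean names are placeholders, and the interceptor's authentication and access-decision managers are omitted for brevity):

```java
@Bean
public MethodSecurityInterceptor securityInterceptor(MethodSecurityMetadataSource mds) {
    MethodSecurityInterceptor interceptor = new MethodSecurityInterceptor();
    interceptor.setSecurityMetadataSource(mds); // same source the advisor uses below
    // setAuthenticationManager / setAccessDecisionManager omitted for brevity
    return interceptor;
}

@Bean
public MethodSecurityMetadataSourceAdvisor securityAdvisor(MethodSecurityMetadataSource mds) {
    // advice bean name, metadata source, and the source's bean name (needed for serialization)
    return new MethodSecurityMetadataSourceAdvisor(
            "securityInterceptor", mds, "methodSecurityMetadataSource");
}

@Bean
public DefaultAdvisorAutoProxyCreator autoProxyCreator() {
    // applies all Advisors in the context, so the pointcut can skip non-secure methods
    return new DefaultAdvisorAutoProxyCreator();
}
```

The advisor's pointcut matches only methods with security metadata, so beans whose methods are all public (non-secure) never pay the interception cost.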
https://docs.spring.io/autorepo/docs/spring-security/4.0.0.RC1/apidocs/org/springframework/security/access/intercept/aopalliance/MethodSecurityMetadataSourceAdvisor.html
CC-MAIN-2019-09
en
refinedweb
public class CumulativePermission
extends AbstractPermission

Represents a Permission that is constructed at runtime from other permissions. Methods return this, in order to facilitate method chaining.

Fields inherited from class org.springframework.security.acls.domain.AbstractPermission:
code, mask

Fields inherited from interface org.springframework.security.acls.model.Permission:
RESERVED_OFF, RESERVED_ON, THIRTY_TWO_RESERVED_OFF

Methods inherited from class org.springframework.security.acls.domain.AbstractPermission:
equals, getMask, hashCode, toString

Methods inherited from class java.lang.Object:
clone, finalize, getClass, notify, notifyAll, wait, wait, wait

Constructor:

public CumulativePermission()

Methods:

public CumulativePermission clear(Permission permission)

public CumulativePermission clear()

public CumulativePermission set(Permission permission)

public String getPattern()

Description copied from interface: Permission. Returns a String representing this permission. Implementations are free to format the pattern as they see fit, although under no circumstances may Permission.RESERVED_OFF or Permission.RESERVED_ON be used within the pattern. An exemption is in the case of Permission.RESERVED_OFF, which is used to denote a bit that is off (clear). Implementations may also elect to use Permission.RESERVED_ON internally for computation purposes, although this method may not return any String containing Permission.RESERVED_ON.

The returned String must be 32 characters in length.

This method is only used for user interface and logging purposes. It is not used in any permission calculations. Therefore, duplication of characters within the output is permitted.

Specified by: getPattern in interface Permission
Overrides: getPattern in class AbstractPermission
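As a usage illustration (not part of the Javadoc), here is how the chaining works in practice, with BasePermission supplying the standard permission constants:

```java
import org.springframework.security.acls.domain.BasePermission;
import org.springframework.security.acls.domain.CumulativePermission;

// Combine READ and WRITE into one permission; each set() returns this, so calls chain.
CumulativePermission readWrite = new CumulativePermission();
readWrite.set(BasePermission.READ)
         .set(BasePermission.WRITE);

int mask = readWrite.getMask(); // 0x01 | 0x02 == 3
```

clear(Permission) removes a single bit from the accumulated mask, and clear() resets the permission entirely, again returning this for further chaining.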
https://docs.spring.io/autorepo/docs/spring-security/4.0.0.RC1/apidocs/org/springframework/security/acls/domain/CumulativePermission.html
CC-MAIN-2019-09
en
refinedweb
Hi Curtis,

I was actually trying to either find a data structure that held all the input fields or create one myself. If I used the method you described below of calling getData() and then getType() and then parsing that, I have to iterate through all the fields for each field in the data file. From the looks of setMaps(), it seems as if the structure "dr[]" that holds Data objects is local to setMaps() and I can't access it from VisadAPI.java when iterating through the array. Do you know if there is a similar global structure? Also, is there a method in BasicSSCell called parse()? Because I'm not sure how you would parse() a MathType.

Since I'm not sure if there is such a global structure, I'm iterating through the Scalar[] array in MappingDialog.java of String names corresponding to input fields and then constructing the RealType from the string name. The reason is that when I "evolve()" the current generation of cells, I'm changing what each field (or "gene") gets mapped to. So I have to iterate through the structure holding the fields to re-assign them after each generation in order to display the new mappings. In the VisadAPI.java attachment, in setCellMapping(width, height), I go through the array of strings called "Scalars[]" which is declared in MappingDialog, and my function getScalars() in MappingDialog.java just returns it. I also declared the local variable "mapDialog" in addMapDialog() to be a public class variable in FancySSCell.java so I can access the Scalars array from VisadAPI.java.

When I implement the functionality for selecting multiple cells, I'm not going to adjust the rest of the spreadsheet to accommodate this. i.e. I'm not going to worry about the formula bar, or the checkbox, or the cut, paste, copy buttons even though they should be changed. I merely want the user to be able to select multiple cells so that these preferred cells can "live" to the next generation. I just need some way of knowing which cells the user likes best and then to feed those particular mappings or cell #'s into my evolution function (in another class). Knowing that, if I were to use the java Vector class to store the cells that are selected at the same time, would changing just the code around the CurX, CurY variables make sense? Or do you know of other places that should be included?

Thanks,
Michelle

(VisadAPI.java doesn't compile yet, but I attached it to show how I'm iterating through Scalars[] to see if you think it makes sense.)

-----Original Message-----
Sent: 23, 2003 2:54 PM
To: Michelle Kam
Subject: RE: specifying fields from input file

Hi Michelle,

> When I load a data file into a spreadsheet cell, the input fields such as
> latitude, longitude, etc are different per each file. Do you know in which class
> I can find those fields that appear in the left column of the mapping dialog
> box called "Map From"? I'm looking for an array or some data structure that
> holds those fields so that I can map each input field to a corresponding
> Display type such as CylRadius that shows up in the "Map To" section of the
> mapping dialog box.

MappingDialog extracts these ScalarTypes from the data's MathType. Call BasicSSCell.getData to get the Data object, then call getType on the Data, then parse the MathType it returns. MappingDialog does this with a collection of recursive algorithms, but if you know something about the structure of your MathType, you can probably make some assumptions to simplify the process.
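[In code, the suggestion above amounts to something like this sketch; the cell variable is an assumption, and real code would recurse into Function/Tuple types the way MappingDialog does:]

```java
// Get the Data from the cell, then its MathType, then walk the type tree.
Data data = cell.getData();               // BasicSSCell.getData()
MathType type = data.getType();           // e.g. a FunctionType or TupleType
System.out.println(type.prettyString());  // inspect the structure before parsing it
```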
> I was trying to find a way to select multiple cells at once by pressing
> "Control" and selecting the desired cells with the mouse button. Would you
> recommend adding more fields in SpreadSheet.java like "CurX2" and "CurY2" to
> represent the 2nd cell that was selected by the user? And then add a method
> in SpreadSheet.java to handle 2 cells?

You'll need a more general solution to do multiple selection. My first thought is to use a Vector to keep a list of which cells are currently selected. Then you'd have to rewrite all the code that affects the selected cell to instead go through that Vector and apply changes to all selected cells.

This problem is much trickier than it appears at first, because you have to make some decisions about how things behave when multiple cells are selected. For example, you'd probably need to display "[Multiple cells selected]" or something in the formula bar when a multiple selection exists. If the dimensionalities of the selected cells do not match, you'd need to use a fuzzy gray checkbox in the cell menu to represent that fact. You'd need to arbitrarily pick a cell for adding new data when the user types something in the formula bar. You'd probably have to disable the Cut, Copy and Paste features when a multiple selection exists. Overall, multiple selection support represents significant work, with little to no added functionality or convenience, which is why I never added it.

> And when I set up my array holding all the possible "Map To" fields that
> show up in the mapping dialog when you run the program, I counted 44
> different fields in the "Map To" section but in MappingDialog.java, there
> are 50 different entries for MapNames[][] and MapTypes[][]. Why don't the 6
> extra fields like "Hue" and "Saturation" appear in the dialog box?

Hue and Saturation actually *do* appear in the MappingDialog. They are labeled "H" and "S", respectively. However, there are six types that are not represented in that graphic. These are types that were added to VisAD after I designed the MappingDialog graphic, and thus they are not included. Actually, I did redesign the graphic once to incorporate some new types we added, but those latest six were added after the first redesign.

-Curtis

Attachment: VisadAPI.java
Description: Binary data

```java
// changed the variable "mapDialog" from a local variable to a public class variable
// so getMapsFrom() in MappingDialog.java can be accessed
public MappingDialog mapDialog;

// around line 920, public accessor method to get the private "maps from" array
/*
 * Gets the "map from" array
 */
public String[] getMapsFrom() {
    return Scalars;
}
```
https://www.unidata.ucar.edu/mailing_lists/archives/visad/2003/msg00682.html
CC-MAIN-2019-09
en
refinedweb
We have a custom MVC controller that dynamically loads different views, with the view name being specified through an action parameter. It was working fine. Now we are going through a website localization project, translating most of our UI to another language. We will deploy the same codebase to 2 different websites, and the WEB.CONFIG culture settings will dictate the UI language for each site. We used resources for most of the site localization, but for this "dynamic" controller we have decided to use a different approach. For each dynamic view (let's say DynamicView.cshtml) we would also create a .RU equivalent like DynamicView.ru.cshtml. We also want it to gracefully fall back to the ENGLISH version of the view if the RU version was not found. So here is the code:

```csharp
public class DCController : MyControllerBase
{
    public ActionResult Index(string ViewName)
    {
        if (Utils.IsEnglish())
            return View(ViewName);
        else
        {
            var vru = ViewName + ".ru";
            var fv = ViewEngines.Engines.FindView(this.ControllerContext, vru, null);
            if (fv.View == null)
                // .ru view was NOT found, so let's search WITHOUT .RU
                // (search for the "English" version of this view)
                return View(ViewName);
            return View(vru); // How can I use the "fv" object here to extract
                              // and return the desired ViewResult object????...
        }
    }
}
```

The code above works as desired. However, what bugs me is that I was not able to use the already initialized ViewEngineResult fv variable to extract the desired ViewResult to be returned as an ActionResult - please see the very last line of code. So the question is: how can I extract a ViewResult object from a ViewEngineResult that was obtained through ViewEngines.Engines.FindView??? Please advise!

I think you're confusing ViewResult/ActionResult with ViewEngineResult (the result of a view engine view search, with an IView object in it).

UPD: return View(fv.View); might be what you're looking for there.

Yes, I am somewhat confused by definition b/c I've asked for help - but the original question remains unsolved. Let me try to explain it in different words:

- I have performed the view search and got a valid ViewEngineResult; the variable is called fv in the code above
- I need to use that fv instance to get the ViewResult object (that needs to be returned by the controller method)
- I cannot simply return fv.View b/c it is an IView interface and it cannot be casted to ViewResult
- I do not want to return View(vru) where vru is just a string, so in theory the view engine would be performing another search based on that string - but we have already performed that search earlier and we even have a valid ViewEngineResult - but I cannot figure out how to use it (see above), so that was my question - how can I get a ViewResult from a ViewEngineResult....

Did not notice you used a string there. You should use return View(fv.View) instead. You can't really cast a View to a ViewResult, but you can build a ViewResult from a View. return View(fv.View) should do exactly that, and there's no duplicated view lookup.

Aaahhhh you are SO right :) I was looking into ViewResult constructors, but I should have been looking into Controller.View() overloads (link to MSDN), and sure enough, thanks to your hint, here is the one that takes IView as a param: View(IView) - Creates a ViewResult object that renders the specified IView object.

Now if you could only put your Bitcoin address in your profile... I want to send $2 your way - I hope you do not mind :)

Thanks for helping out :) Thanks for your help! Here's some coffee coins from me :) Not much, but in many parts of our planet $2 could buy a lunch.
Bitcoin knows no borders, so I hope programmers and students from other countries will eventually join our site, and a Bitcoin tipping culture amplified through the internet would help shift global wealth distribution towards a fairer curve with longer and richer tails.
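For future readers, the accepted fix in code form (a sketch using the thread's own fv variable):

```csharp
var fv = ViewEngines.Engines.FindView(this.ControllerContext, vru, null);
if (fv.View == null)
    return View(ViewName);   // fall back to the English view
return View(fv.View);        // reuse the already-found IView, no second lookup
```

The View(IView) overload wraps the existing IView in a new ViewResult, which is why no cast from IView to ViewResult is needed.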
http://bitexperts.com/Question/Detail/140/get-viewresult-from-viewengineresult-object-that-was-obtained-through
CC-MAIN-2019-09
en
refinedweb
14.3. Resolving dependencies in a directed acyclic graph with a topological sort

This recipe illustrates a well-known graph algorithm: topological sorting. Let's consider a directed graph describing dependencies between items. For example, in a package manager, before we can install a given package P, we may need to install dependent packages. The set of dependencies forms a directed graph. With topological sorting, the package manager can resolve the dependencies and find the right installation order of the packages.

Topological sorting has many other applications. Here, we will illustrate this notion on real data from the JavaScript package manager npm. We will find the installation order of the required packages for the react JavaScript package.

How to do it...

1. We import a few packages:

```python
import io
import json
import requests
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
%matplotlib inline
```

2. We download the dataset (a GraphML file stored on GitHub, that we created using a script at) and we load it with the NetworkX function read_graphml():

```python
url = (''
       'cookbook-2nd-data/blob/master/'
       'react.graphml?raw=true')
f = io.BytesIO(requests.get(url).content)
graph = nx.read_graphml(f)
```

3. The graph is a directed graph (DiGraph) with few nodes and edges:

```python
graph
```

```
<networkx.classes.digraph.DiGraph at 0x7f69ac6dfdd8>
```

```python
len(graph.nodes), len(graph.edges)
```

```
(16, 20)
```

4. Let's draw this graph:

```python
fig, ax = plt.subplots(1, 1, figsize=(8, 8))
nx.draw_networkx(graph, ax=ax, font_size=10)
ax.set_axis_off()
```

5. A topological sort only exists when the graph is a directed acyclic graph (DAG). This means that there is no cycle in the graph, that is, no circular dependency. Is our graph a DAG? Let's see:

```python
nx.is_directed_acyclic_graph(graph)
```

```
True
```

6. We can perform the topological sort, thereby obtaining a linear installation order satisfying all dependencies:

```python
ts = list(nx.topological_sort(graph))
ts
```

```
['react', 'prop-types', 'fbjs', 'ua-parser-js', 'setimmediate',
 'promise', 'asap', 'object-assign', 'loose-envify', 'js-tokens',
 'isomorphic-fetch', 'whatwg-fetch', 'node-fetch', 'is-stream',
 'encoding', 'core-js']
```

Since we used the convention that A directs to B if B needs to be installed before A (A depends on B), the installation order is the reversed order here.

7. Finally, we draw our graph with a shell layout algorithm, and we display the dependence order using the node colors (darker nodes need to be installed before lighter ones):

```python
# Each node's color is the index of the node in the
# topological sort.
colors = [ts.index(node) for node in graph.nodes]
nx.draw_shell(graph,
              node_color=colors,
              cmap=plt.cm.Blues,
              font_size=8,
              width=.5)
```

How it works...
We used the following code (adapted from) to obtain the dependency graph of the react npm package (the npm package-page URL prefix did not survive in this copy of the snippet):

```python
from lxml.html import fromstring
import cssselect  # Need to do: pip install cssselect
from requests.packages import urllib3
urllib3.disable_warnings()

fetched_packages = set()

def import_package_dependencies(graph, pkg_name,
                                max_depth=3, depth=0):
    if pkg_name in fetched_packages:
        return
    if depth > max_depth:
        return
    fetched_packages.add(pkg_name)
    url = f'{pkg_name}'
    response = requests.get(url, verify=False)
    doc = fromstring(response.content)
    graph.add_node(pkg_name)
    for h3 in doc.cssselect('h3'):
        content = h3.text_content()
        if content.startswith('Dependencies'):
            for dep in h3.getnext().cssselect('a'):
                dep_name = dep.text_content()
                print('-' * depth * 2, dep_name)
                graph.add_node(dep_name)
                graph.add_edge(pkg_name, dep_name)
                import_package_dependencies(
                    graph, dep_name,
                    depth=depth + 1)

graph = nx.DiGraph()
import_package_dependencies(graph, 'react')
nx.write_graphml(graph, 'react.graphml')
```

You can use that code to obtain the dependency graph of any other npm package. The script may take a few minutes to complete.

There's more...

Directed acyclic graphs are found in many applications. They can represent causal relations, influence diagrams, dependencies, and other concepts. For example, the version history of a distributed revision control system such as Git is described with a DAG.

Topological sorting is useful in any scheduling task in general (project management and instruction scheduling).

Here are a few references:

- Directed acyclic graphs on NetworkX
- Topological sort documentation on NetworkX
- Topological sorting on Wikipedia
- Directed acyclic graphs on Wikipedia
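One quick sanity check worth adding (not in the original recipe): given the edge convention above, a valid topological order must place every package before its dependencies, which we can assert directly on the graph and ts variables from the recipe:

```python
# Every edge (u, v) means "v must be installed before u";
# in the sorted list, u must therefore appear before v.
assert all(ts.index(u) < ts.index(v) for u, v in graph.edges)
```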
https://ipython-books.github.io/143-resolving-dependencies-in-a-directed-acyclic-graph-with-a-topological-sort/
CC-MAIN-2019-09
en
refinedweb
9.3. Fitting a function to data with nonlinear least squares

In this recipe, we show an application of numerical optimization to nonlinear least squares curve fitting. The goal is to fit a function, depending on several parameters, to data points. In contrast to the linear least squares method, this function does not have to be linear in those parameters. We will illustrate this method on artificial data.

How to do it...

1. Let's import the usual libraries:

import numpy as np
import scipy.optimize as opt
import matplotlib.pyplot as plt
%matplotlib inline

2. We define a logistic function with four parameters:

def f(x, a, b, c, d):
    return a / (1. + np.exp(-c * (x - d))) + b

3. Let's define four random parameters:

a, c = np.random.exponential(size=2)
b, d = np.random.randn(2)

4. Now, we generate random data points by using the sigmoid function and adding a bit of noise:

n = 100
x = np.linspace(-10., 10., n)
y_model = f(x, a, b, c, d)
y = y_model + a * .2 * np.random.randn(n)

5. Here is a plot of the data points, with the particular sigmoid used for their generation (in dashed black):

fig, ax = plt.subplots(1, 1, figsize=(6, 4))
ax.plot(x, y_model, '--k')
ax.plot(x, y, 'o')

6. We now assume that we only have access to the data points and not the underlying generative function. These points could have been obtained during an experiment. By looking at the data, the points appear to approximately follow a sigmoid, so we may want to try to fit such a curve to the points. That's what curve fitting is about. SciPy's curve_fit() function allows us to fit a curve defined by an arbitrary Python function to the data:

(a_, b_, c_, d_), _ = opt.curve_fit(f, x, y)

7. Now, let's take a look at the fitted sigmoid curve:

y_fit = f(x, a_, b_, c_, d_)
fig, ax = plt.subplots(1, 1, figsize=(6, 4))
ax.plot(x, y_model, '--k')
ax.plot(x, y, 'o')
ax.plot(x, y_fit, '-')

The fitted sigmoid appears to be reasonably close to the original sigmoid used for data generation.

How it works...

In SciPy, nonlinear least squares curve fitting works by minimizing the following cost function:

\(S(\beta) = \sum_{i=1}^{n} \left(y_i - f(x_i, \beta)\right)^2\)

Here, \(\beta\) is the vector of parameters (in our example, \(\beta = (a,b,c,d)\)). Nonlinear least squares is really similar to linear least squares for linear regression. Whereas the function \(f\) is linear in the parameters with the linear least squares method, it is not linear here. Therefore, the minimization of \(S(\beta)\) cannot be done analytically by solving the derivative of \(S\) with respect to \(\beta\). SciPy implements an iterative method called the Levenberg-Marquardt algorithm (an extension of the Gauss-Newton algorithm).

There's more...

Here are further references:

- Reference documentation of curve_fit
- Nonlinear least squares on Wikipedia
- Levenberg-Marquardt algorithm on Wikipedia

See also

- Minimizing a mathematical function
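As a side note (not covered in the recipe itself), opt.curve_fit() also accepts an initial guess for the parameters and returns an estimated covariance matrix, which can be turned into rough error bars on the fitted parameters. Assuming x, y, and f from the steps above are in scope (the initial guess values here are arbitrary):

p0 = (1., 0., 1., 0.)  # initial guess for (a, b, c, d)
popt, pcov = opt.curve_fit(f, x, y, p0=p0)
perr = np.sqrt(np.diag(pcov))  # one-standard-deviation parameter errors
print(popt, perr)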
https://ipython-books.github.io/93-fitting-a-function-to-data-with-nonlinear-least-squares/
CC-MAIN-2019-09
en
refinedweb
Seam-spring bean injection not working - Anton Lisovenko, Jan 5, 2012 6:45 AM

Hi. I'm trying to use spring beans as cdi ones with help of the seam-spring module. I managed to do this using appContext injection, but failed to inject the bean itself:

@ApplicationScoped
public class SpringCdiProducer {
    @Produces @SpringContext @Web
    ApplicationContext context;

    @Produces @SpringBean
    RoleFactory roleFactory;
}

in my bean:

@Inject @SpringContext @Web
transient private ApplicationContext appContext;

@Inject
transient private RoleFactory roleFactory;

...

public void someMethod() {
    this.role = roleFactory.getRole(roleName); // doesn't work
    this.role = appContext.getBean(RoleFactory.class).getRole(roleName); // works
}

I get the following exception:

org.jboss.weld.exceptions.IllegalArgumentException: WELD-001324 Argument bean must not be null
at org.jboss.weld.manager.BeanManagerImpl.getReference(BeanManagerImpl.java:714)
at org.jboss.seam.spring.extension.SpringBeanProducer.produce(SpringBeanProducer.java:39)
at org.jboss.weld.bean.AbstractProducerBean.create(AbstractProducerBean.java:361)
at org.jboss.weld.context.unbound.DependentContextImpl.get(DependentContextImpl.java:67)
at org.jboss.weld.manager.BeanManagerImpl.getReference(BeanManagerImpl.java:690)
at org.jboss.weld.manager.BeanManagerImpl.getReference(BeanManagerImpl.java:772)
at org.jboss.weld.injection.FieldInjectionPoint.inject(FieldInjectionPoint.java:138)
at org.jboss.weld.util.Beans.injectBoundFields(Beans.java:872)
at org.jboss.weld.util.Beans.injectFieldsAndInitializers(Beans.java:884)
at org.jboss.weld.bean.ManagedBean$ManagedBeanInjectionTarget$1$1.proceed(ManagedBean.java:182)
at org.glassfish.weld.services.InjectionServicesImpl.aroundInject(InjectionServicesImpl.java:134)

It seems the default app context is not found when I inject the spring bean. I thought that spring beans from the spring web context should be discovered automatically with no manual actions. Thanks.

1. Re: Seam-spring bean injection not working - Marius Bogoevici, Jan 7, 2012 2:49 PM (in response to Anton Lisovenko)

Hi Anton, At a first glance this looks valid. The fact that it does not work makes me want to look deeper into the issue, looks like a possible bug. Spring beans from the web context can be made available via @Produces @SpringBean, but I understand that this is exactly the part that does not work. I plan on providing automatic auto-importing, auto-vetoing for Spring beans coming from a web context (i.e. skipping the @Produces @SpringBean part) but that is still WIP, as a more elaborate solution is required due to Servlet and CDI container lifecycle issues. Cheers, Marius

2. Re: Seam-spring bean injection not working - Anton Lisovenko, Jan 9, 2012 1:55 PM (in response to Anton Lisovenko)

Marius, maybe this will help. In web.xml I use the standard configuration for spring initialization:

<context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>/WEB-INF/spring/applicationContext*.xml</param-value>
</context-param>
<listener>
    <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>

and WEB-INF/spring contains three app files: applicationContext.xml, applicationContext-roles.xml, applicationContext-security.xml, and I use SpringCdiProducer as mentioned above. Now I use the injected ApplicationContext and it works ok. Thanks to this module I can now use Spring from CDI directly without touching FacesContext (FacesContextUtils.getWebApplicationContext(..)). This helps a lot for unit testing business logic.

3.
Re: Seam-spring bean injection not working - Marius Bogoevici, Jan 13, 2012 4:04 PM (in response to Anton Lisovenko)

Anton, Thanks for the update and the interest in the Seam Spring module. I haven't been able to reproduce this issue so I created a basic example from our JBoss AS7 Spring archetype and added the Seam Spring module on top of it as a simple example of how this should work. You can find it here: (use the 'glassfish' branch). You can see the CDI extension working when invoking the SimpleServlet (via $URL/servlet?user=jdoe), as classes in the org.jboss.seam.spring.example.cdi package are all CDI beans (reusing the Spring configured UserDao). I hope that this helps moving the discussion forward; please take a look and maybe you can spot the difference between your case and this. Please let me know how that worked. Cheers, Marius

4. Re: Seam-spring bean injection not working - gauthier Peel, Nov 15, 2012 12:01 PM (in response to Marius Bogoevici)

Marius, if you happen to have worked on this improvement, I would really be interested:

"I plan on providing automatic auto-importing, auto-vetoing for Spring beans coming from a web context (i.e. skipping the @Produces @SpringBean part) but that is still WIP, as a more elaborate solution is required due to Servlet and CDI container lifecycle issues."

BTW what does WIP mean? rgds Gauthier
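For readers hitting the same problem: a possible workaround suggested by the thread itself (this is a sketch based on Anton's observation that context.getBean() works, not an official Seam Spring recipe) is to produce the Spring bean manually from the injected web ApplicationContext instead of relying on the @SpringBean lookup:

@ApplicationScoped
public class SpringCdiProducer {

    @Inject @SpringContext @Web
    ApplicationContext context;

    // Manual producer: getBean() is the lookup that Anton reports
    // working, so expose the bean to CDI through it.
    @Produces
    public RoleFactory produceRoleFactory() {
        return context.getBean(RoleFactory.class);
    }
}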
https://developer.jboss.org/thread/178990
CC-MAIN-2019-09
en
refinedweb
debugmode¶

Guide¶

The DebugMode evaluation mode includes a number of self-checks and assertions that can help to diagnose several kinds of programmer errors that can lead to incorrect output. It is much slower to evaluate a function or method with DebugMode than it would be in 'FAST_RUN' or even 'FAST_COMPILE'. We recommend you use DebugMode during development, but not when you launch 1000 processes on a cluster.

DebugMode can be used as follows:

import theano
from theano import tensor
from theano.compile.debugmode import DebugMode

x = tensor.dscalar('x')
f = theano.function([x], 10*x, mode='DebugMode')

f(5)
f(0)
f(7)

It can also be used by setting the configuration variable config.mode. It can also be used by passing a DebugMode instance as the mode, as in

>>> f = theano.function([x], 10*x, mode=DebugMode(check_c_code=False))

If you use the constructor compile.DebugMode rather than the keyword DebugMode, you can configure its behaviour via constructor arguments.

Reference¶

- class theano.compile.debugmode.DebugMode(Mode)[source]¶

Evaluation Mode that detects internal theano errors. This mode catches several kinds of internal error:

- inconsistent outputs when calling the same Op twice with the same inputs, for instance if the c_code and perform implementations are inconsistent, or in case of incorrect handling of output memory (see BadThunkOutput)
- a variable replacing another when their runtime values don't match. This is a symptom of an incorrect optimization step, or faulty Op implementation (raises BadOptimization)
- stochastic optimization ordering (raises StochasticOrder)
- incomplete destroy_map specification (raises BadDestroyMap)
- an op that returns an illegal value not matching the output Variable Type (raises InvalidValueError)

Each of these exceptions inherits from the more generic DebugModeError. If there are no internal errors, this mode behaves like FAST_RUN or FAST_COMPILE, but takes a little longer and uses more memory. If there are internal errors, this mode will raise a DebugModeError exception.

stability_patience = config.DebugMode.patience
When checking for the stability of optimization, recompile the graph this many times. Default 10.

check_c_code = config.DebugMode.check_c
Should we evaluate (and check) the c_code implementations? True -> yes, False -> no. Default yes.

check_py_code = config.DebugMode.check_py
Should we evaluate (and check) the perform implementations? True -> yes, False -> no. Default yes.

check_isfinite = config.DebugMode.check_finite
Should we check for (and complain about) NaN/Inf ndarray elements? True -> yes, False -> no. Default yes.

require_matching_strides = config.DebugMode.check_strides
Check for (and complain about) Ops whose python and C outputs are ndarrays with different strides. (This can catch bugs, but is generally overly strict.) 0 -> no check, 1 -> warn, 2 -> err. Default warn.

__init__(self, optimizer='fast_run', stability_patience=None, check_c_code=None, check_py_code=None, check_isfinite=None, require_matching_strides=None, linker=None)[source]¶
Initialize member variables. If any of these arguments (except optimizer) is not None, it overrides the class default. The linker argument is not used. It is set there to allow Mode.requiring() and some other functions to work with DebugMode too.

The keyword version of DebugMode (which you get by using mode='DebugMode') is quite strict, and can raise several different Exception types. The following are DebugMode exceptions you might encounter:

- class theano.compile.debugmode.
DebugModeError(Exception)[source]¶
This is a generic error. All the other exceptions inherit from this one. This error is typically not raised directly. However, you can use except DebugModeError: ... to catch any of the more specific types of Exception.

- class theano.compile.debugmode.BadThunkOutput(DebugModeError)[source]¶
This exception means that different calls to the same Op with the same inputs did not compute the same thing like they were supposed to. For instance, it can happen if the python (perform) and c (c_code) implementations of the Op are inconsistent (the problem might be a bug in either perform or c_code, or both). It can also happen if perform or c_code does not handle correctly output memory that has been preallocated (for instance, if it did not clear the memory before accumulating into it, or if it assumed the memory layout was C-contiguous even if it is not).

- class theano.compile.debugmode.BadOptimization(DebugModeError)[source]¶
This exception indicates that an Optimization replaced one variable (say V1) with another one (say V2) but at runtime, the values for V1 and V2 were different. This is something that optimizations are not supposed to do. It can be tricky to identify the one-true-cause of an optimization error, but this exception provides a lot of guidance. Most of the time, the exception object will indicate which optimization was at fault. The exception object also contains information such as a snapshot of the before/after graph where the optimization introduced the error.

- class theano.compile.debugmode.BadDestroyMap(DebugModeError)[source]¶
This happens when an Op's perform() or c_code() modifies an input that it wasn't supposed to. If either the perform or c_code implementation of an Op might modify any input, it has to advertise that fact via the destroy_map attribute. For detailed documentation on the destroy_map attribute, see Inplace operations.

- class theano.compile.debugmode.BadViewMap(DebugModeError)[source]¶
This happens when an Op's perform() or c_code() creates an alias or alias-like dependency between an input and an output... and it didn't warn the optimization system via the view_map attribute. For detailed documentation on the view_map attribute, see Views.

- class theano.compile.debugmode.StochasticOrder(DebugModeError)[source]¶
This happens when an optimization does not perform the same graph operations in the same order when run several times in a row. This can happen if any steps are ordered by id(object) somehow, such as via the default object hash function. A stochastic optimization invalidates the pattern of work whereby we debug in DebugMode and then run the full-size jobs in FAST_RUN.

- class theano.compile.debugmode.InvalidValueError(DebugModeError)[source]¶
This happens when some Op's perform or c_code implementation computes an output that is invalid with respect to the type of the corresponding output variable, like if it returned a complex-valued ndarray for a dscalar Type. This can also be triggered when floating-point values such as NaN and Inf are introduced into the computations. It indicates which Op created the first NaN. These floating-point values can be allowed by passing the check_isfinite=False argument to DebugMode.
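Since every error above derives from DebugModeError, one except clause is enough to trap all of them. A minimal sketch reusing the function from the guide section:

import theano
from theano import tensor
from theano.compile.debugmode import DebugModeError

x = tensor.dscalar('x')
f = theano.function([x], 10 * x, mode='DebugMode')

try:
    f(5)
except DebugModeError as e:
    # BadThunkOutput, BadOptimization, InvalidValueError, etc. all
    # inherit from DebugModeError, so any of them is caught here.
    print('DebugMode detected an internal error:', e)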
http://deeplearning.net/software/theano/library/compile/debugmode.html
CC-MAIN-2019-09
en
refinedweb
Wheat in box. Meaning of dream and numbers. Find out what it means to dream wheat in box. The interpretations and numbers of the Neapolitan cabala. wheat 18 Meaning of the dream: prosperity whole wheat 83 threshing wheat 68 Interpretation of the dream: profitable business wheat starch 6 Translation: support from friends amass wheat 36 Dream description: benevolent people famine wheat 77 Meaning: deceptions hidden sheaf of wheat 9 Translation of the dream: prosperity and well-being mow wheat 17 Interpretation: good prospects wheat flour 24 Sense of the dream: abundance and wealth green wheat 64 What does it mean: remarkable progress wheat sheaves 42 Meaning of the dream: prosperity and success wheat threshing 15 Description: affections safe ground wheat 34 Interpretation of the dream: bright future wheat, grain 68 Translation: money draw wheat from something 46 Dream description: disease harvest wheat 4 Meaning: Progress slow but sure wheat crop 10 Translation of the dream: progress with work cut wheat 67 Interpretation: prosperity and well-being bag of wheat 41 Sense of the dream: greatest hits plant wheat 25 What does it mean: family welfare thresh wheat 25 Meaning of the dream: great fortune baked with wheat 47 wheat riddled 20 uncooked wheat 84 wheat plant it 4 wheat planting 54 wheat Sicilian 19 wheat board 76 import wheat 57 bagged wheat 36 mills wheat 69 ear of wheat 22 wheat field 42 wheat battering 16 cleanse wheat 66 wheat screenings 42 miller with wheat 76 wheat with Straight Shank 23 Interpretation: money or angling to profit eat bread wheat 15 Sense of the dream: loss to the poor; the rich gain wheat grow in the ears 3 What does it mean: good omen box 8 Meaning of the dream: you will be left last box with jewelry 4 Description: votes fulfilled prompter's box 36 Interpretation of the dream: ephemeral achievements phone box 27 Translation: surprises in the work box of candy 57 Dream description: concerns falling broken box 71 Meaning: wrong impressions box for alms 47 Translation of the dream: changes tiring iron box 12 Interpretation: daring proposals wooden box 70 Sense of the dream: extravagant actions box full 27 What does it mean: foreign relations empty box 32 Meaning of the dream: general relaxation litter box 6 Description: new ideas wooden little box 70 Interpretation of the dream: new ideas work box 15 Translation: crisis in the family box with linen 35 Dream description: enforcer box with flour 24 Meaning: prosperity and gains match in the box 20 Translation of the dream: proposals devious compass in the box 88 Interpretation: business meetings lid of a box 43 Sense of the dream: submission to the will of others box horn 85 What does it mean: good social skills dates in box 32 Meaning of the dream: heightened sense of adventure box office 59 Description: enmities dangerous alms box 75 Interpretation of the dream: receipts that are slow sentry-box 81 Translation: happy events box of matches 58 Dream description: impulsive instincts box of pads 18 Meaning: selfish concerns box of pens 4 Translation of the dream: reckless appreciations money box 88 Interpretation: work activities carton box 50 Sense of the dream: reckless speculation silver box 5 What does it mean: uncertain business gold box 11 Meaning of the dream: new friendships ivory box 35 Description: winning the game glass box 32 Interpretation of the dream: secret relations cigarettes in the box 90 Translation: important negotiations seal a box 20 Dream description: new friends nougat in box 16 Meaning: unusual 
happenings ballot box 72 Translation of the dream: excessive dynamism streaky in box 36 Interpretation: challenging tasks tin box 78 junction box 28 metal box 54 gift box 76 safe-deposit box 72 Interpretation of the dream: lack of freedom sentry box empty 3 Translation: happy events sentry box with sentinel 24 Dream description: good job performance
https://www.lasmorfianapoletana.com/en/meaning-of-dreams/?src=wheat+in+box
CC-MAIN-2019-09
en
refinedweb
HTTP client¶

The first code example is the simplest thing you can do with the cpp-netlib. The application is a simple HTTP client, which can be found in the subdirectory libs/network/example/http_client.cpp. All this example does is create and send an HTTP request to a server and print the response body.

The code¶

Without further ado, the code to do this is as follows:

#include <boost/network/protocol/http/client.hpp>
#include <iostream>

int main(int argc, char *argv[]) {
    using namespace boost::network;

    if (argc != 2) {
        std::cout << "Usage: " << argv[0] << " [url]" << std::endl;
        return 1;
    }

    http::client client;
    http::client::request request(argv[1]);
    request << header("Connection", "close");
    http::client::response response = client.get(request);

    std::cout << body(response) << std::endl;

    return 0;
}

Running the example¶

You can then run this to get the Boost website:

$ cd ~/cpp-netlib-build
$ make http_client
$ ./example/http_client

Note The instructions for all these examples assume that cpp-netlib is built outside the source tree, according to CMake conventions. For the sake of consistency we assume that this is in the ~/cpp-netlib-build directory.

Diving into the code¶

Since this is the first example, each line will be presented and explained in detail.

#include <boost/network/protocol/http/client.hpp>

All the code needed for the HTTP client resides in this header.

http::client client;

First we create a client object. The client abstracts all the connection and protocol logic. The default HTTP client is version 1.1, as specified in RFC 2616.

http::client::request request(argv[1]);

Next, we create a request object, with a URI string passed as a constructor argument.

request << header("Connection", "close");

cpp-netlib makes use of stream syntax and directives to allow developers to build complex message structures with greater flexibility and clarity. Here, we add the HTTP header "Connection: close" to the request in order to signal that the connection will be closed after the request has completed.

http::client::response response = client.get(request);

Once we've built the request, we then make an HTTP GET request through the http::client, from which an http::response is returned. http::client supports all common HTTP methods: GET, POST, HEAD, DELETE.

std::cout << body(response) << std::endl;

Finally, though we don't do any error checking, the response body is printed to the console using the body directive.

That's all there is to the HTTP client. In fact, it's possible to compress this to a single line:

std::cout << body(http::client().get(http::request("")));

The next example will introduce the uri class.
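As an aside (not part of the original example): cpp-netlib reports connection and protocol failures by throwing exceptions, so a slightly more defensive variant of the same client could wrap the call in a try/catch. The use of the generic std::exception here is an assumption about the exception hierarchy:

#include <boost/network/protocol/http/client.hpp>
#include <iostream>

int main(int argc, char *argv[]) {
    using namespace boost::network;

    if (argc != 2) {
        std::cout << "Usage: " << argv[0] << " [url]" << std::endl;
        return 1;
    }

    try {
        http::client client;
        http::client::request request(argv[1]);
        request << header("Connection", "close");
        http::client::response response = client.get(request);
        std::cout << body(response) << std::endl;
    } catch (std::exception const &e) {
        // network errors, malformed URIs, etc. end up here
        std::cerr << "request failed: " << e.what() << std::endl;
        return 1;
    }
    return 0;
}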
https://cpp-netlib.org/0.12.0/examples/http/http_client.html
CC-MAIN-2019-09
en
refinedweb
AWS Identity and Access Management

AWS Identity and Access Management (IAM) enables you to securely control access to Amazon AWS services and resources for your users. With IAM, you can create and manage Amazon AWS users and groups and use permissions to allow and deny their access to Amazon AWS resources. In some scenarios, you may also want to use AWS Security Token Service, which lets you grant a trusted user temporary, limited access to your Amazon AWS resources.

In the Beijing and Ningxia Regions, IAM is unique in the following ways:

Users and credentials—In the Beijing and Ningxia Regions, there is no concept of "root" or "account" user or credentials. All users are IAM users, including the user who created the account.

No hardware or SMS MFA—The Beijing and Ningxia Regions do not support using hardware or SMS multi-factor authentication (MFA) for IAM users. The Regions do support using virtual MFA. For information about using virtual MFA, see Enabling a Virtual Multi-factor Authentication (MFA) Device.

Web identity federation—Some web identity providers (for example, social media platforms) might not be available in the Beijing and Ningxia Regions.

AWS API Signature Version—Services available in the Beijing and Ningxia Regions support only signature version 4 signing.

Policy names—IAM policy names can contain only the following Unicode characters: horizontal tab (x09), linefeed (x0A), carriage return (x0D), and characters in the range x20 to xFF. In practice, this means that you can't use Chinese characters in policy names.

Service principals in policies—In IAM policies where the principal is a service (for example, Amazon EC2), the service principal name has the following format, with "service" replaced by the name of the service: service.amazonaws.com.cn For more information, see Creating a Role to Delegate Permissions to an AWS Service and Using an IAM Role to Grant Permissions to Applications Running on Amazon EC2 Instances.

SSH public keys—SSH public keys are used only in conjunction with CodeCommit, which is currently not available in the Beijing and Ningxia Regions.

In the Beijing Region, the Amazon Resource Name (ARN) syntax includes the aws-cn partition for resources in the Region. For example: arn:aws-cn:iam::123456789012:user/division_abc/subdivision_xyz/Zhang. In the Ningxia Region, the Amazon Resource Name (ARN) syntax includes the aws-cn partition for resources in the Region. For example: arn:aws-cn:iam::123456789012:user/division_abc/subdivision_xyz/Wang.

Switching Roles - In the Beijing Region and Ningxia Region, after you switch to a role in the AWS Management Console, you cannot switch back to your original credentials. Instead, you must sign out and then sign in again using the desired credentials.

Only some AWS services support service-linked roles in the Beijing and Ningxia Regions. For information about which services support using service-linked roles, see AWS Services That Work with IAM and look for the services that have Yes in the Service-Linked Role column. To learn whether the service supports service-linked roles in a specific region, choose the Yes link to view the service-linked role documentation for that service.

Data about when an IAM identity (user, group, or role) last attempted to access an AWS service through permissions granted by an IAM policy is not available. For more information, see Service Last Accessed Data.
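To make these conventions concrete, here is a hypothetical role trust policy using the service principal format described above (the policy content itself is illustrative, not taken from this page):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com.cn" },
      "Action": "sts:AssumeRole"
    }
  ]
}

And a permissions statement whose Resource uses the aws-cn partition in its ARN (the bucket name is made up for illustration):

{
  "Effect": "Allow",
  "Action": "s3:GetObject",
  "Resource": "arn:aws-cn:s3:::example-bucket/*"
}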
http://docs.amazonaws.cn/en_us/aws/latest/userguide/iam.html
CC-MAIN-2019-09
en
refinedweb
Java applet programming is more than just running it: you can do much more with Java applets, like parameter passing and running in the browser. Learn more in this tutorial.

In the last article we told you to do some practice and observation with the applet window code we provided; we hope you already practiced that. In this article, however, we are going to tell you more about applets, and we will also be creating a small animation using a Java applet and threads. If you have not read the introduction article, please read it here: The Java Applet - first step to web application programming.

Let us show you the working of the applet we created in the last post. In the images below you will see what happens when we do what. When you first run the applet using the applet viewer, it will start the applet viewer window and execute the init and start methods, and hence print the messages on the console, as the image shows. Then we minimized the window of the applet to see what happens; now you see that the stop method is called. When we again maximize/restore the window of the applet, it moves to the start stage again. And finally, when we closed the window, it called the stop method followed by the destroy method. Hope you now properly understand the working of an applet.

Now it's time to show you how an applet looks in the web browser. For this purpose we are going to make a little bit of change to our previous applet, and of course these changes are small but meaningful. First let us change our original applet file. Because we are going to run this applet in the browser, we will not need the applet tag in the Java file, so we are going to remove it; instead we are creating the HTML file which will invoke the applet.

<html>
<body>
<applet code="FirstApplet" height="400" width="400"></applet>
</body>
</html>

And the Java file looks like this, with the addition of a method paint(Graphics g), which takes a Graphics type of argument. The Graphics class is found in the java.awt package, so we also need to import this package or the class we are using. We need graphics because we want to draw the string on the applet, as the browser does not have a console of its own to print to.

import java.applet.Applet;
import java.awt.*;

public class FirstApplet extends Applet {
    public void init() {
    }
    public void start() {
    }
    public void stop() {
    }
    public void destroy() {
    }
    public void paint(Graphics g) {
        g.drawString("Hello World from Applet!", 20, 100);
    }
}

When you call your HTML file with the applet tag in it, you will be asked for permission to run this applet; as we told you already, the applet needs permission to be executed, as otherwise it may cause a security breach.

You might be wondering what the paint() method is all about. We need the paint method to paint something on the applet; in this case we are painting a simple string message, using the method drawString(String message, int x, int y);. This draws the message at the specified position, where x is the distance from the left and y is the distance from the top.

The image below shows the IcedTea web plug-in asking permission to run the applet, and after that the applet itself in the browser. Note: IcedTea is the Java plug-in for the browser on Linux; on Windows, the JVM will provide the plug-in, and all you need to do is allow and activate it. Every time you try to run an applet it will ask for permission.

This tutorial was to show you how you can work with the browser. From now on we will demonstrate in the applet viewer for easy understanding and debugging.
When you are working with applets it is useful to pass parameters to the applet. You can accomplish this task by using the <param> tag from HTML, which is used to pass parameters to the applet. In the following example we will show you how you can pass values using parameters and how you can retrieve them in your program to use them. We will also show you how you can show the applet status, or some custom message, in the status bar of the applet.

import java.applet.Applet;
import java.awt.*;

public class FirstApplet extends Applet {
    public void init() {
    }
    public void start() {
    }
    public void stop() {
    }
    public void destroy() {
    }
    public void paint(Graphics g) {
        String msg = getParameter("examsmyantra");
        String tag = getParameter("tagline");
        g.drawString(msg, 20, 100);
        showStatus(tag);
    }
}
/*
<applet code="FirstApplet" width="500" height="500">
<param name="examsmyantra" value="" >
<param name="tagline" value="Study with Awesomeness" >
</applet>
*/

Running this program results in the following applet. As you see, there is nothing special in this program, only two extra methods and two extra tags in the comment. The <param> tag is used to pass the parameter; the format is:

<PARAM name="name-of-parameter" value="value-of-parameter" >

When you want to access these parameters in your program, you use getParameter(String name-of-parameter), which gets you the value corresponding to the parameter name. In the program above we have done the same: we passed two parameters, and in the program we fetched these two parameters and used them as messages to display; one is drawn with drawString and one is shown in the status bar using the showStatus() method (a defensive variant of this paint() method is shown below).

If you have come through all the content above, you will not want to miss the next part, which will teach you how to create animations with applets and threads: The Applet and Animations.
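One detail the tutorial glosses over (this note and snippet are our addition, not from the original): getParameter() returns null when a <param> tag is missing from the HTML, and passing null to drawString() will typically throw a NullPointerException. A defensive paint() could read:

public void paint(Graphics g) {
    String msg = getParameter("examsmyantra");
    String tag = getParameter("tagline");
    if (msg == null) {
        msg = "no message parameter given"; // hypothetical fallback text
    }
    g.drawString(msg, 20, 100);
    if (tag != null) {
        showStatus(tag);
    }
}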
http://www.examsmyantra.com/article/54/java/java-applet-programming-playing-with-various-utilities
CC-MAIN-2019-09
en
refinedweb
Contents

If you want to get more involved in the development of itools, or just to send patches from time to time, there are two things you need to know: Every software project, even the smallest one, will benefit from a Version Control System, and git is probably the best. For the instructions that follow in this chapter to work properly, you will need a recent version of git, 1.5 at least. The latest version of git can be downloaded from their web site: But if you use GNU/Linux, your distribution will probably include it. For example, to install git in a Gentoo [2] system type:

$ sudo emerge dev-util/git

For Debian [3] or Ubuntu [4] type:

$ sudo apt-get install git-core git-doc git-email gitk

Once git is installed, you should configure it. The only thing required is to give your full name and your email address:

$ git config --global user.name "Luis Belmar-Letelier"
$ git config --global user.email "luis@itaapy.com"

The parameters set with the command git config are written to the configuration file .gitconfig, in the user's home folder. It looks like this:

[user]
    name = Luis Belmar-Letelier

There are, however, other configuration variables that most people would like to define, for example to use colors:

$ git config --global color.branch auto
$ git config --global color.diff auto
$ git config --global color.status auto

If you are going to send patches by email, you also should define the variable sendemail.smtpserver:

$ git config --global sendemail.smtpserver smtp.my-isp.com

For the complete list of configuration variables, check the git config manual page:

$ git config --help

The user's name and email address should be defined in the configuration file. But sometimes it may be useful to override this information for a short period of time; that can be done with some environment variables:

$ export GIT_AUTHOR_NAME="Luis Belmar-Letelier"
$ export GIT_COMMITTER_NAME="Luis Belmar-Letelier"
$ export GIT_AUTHOR_EMAIL="luis@itaapy.com"

With git configured, clone the itools repository:

$ cd ~/sandboxes
$ git clone git://git.hforge.org/itools.git
Initialized empty Git repository in /.../itools/.git/
remote: Counting objects: 22399, done.
remote: Compressing objects: 100% (6091/6091), done.
...
$ cd itools
$ git status
# On branch master
nothing to commit (working directory clean)

To see your local and remote branches use git branch, without and with the option -r respectively:

# Local branches
$ git branch
* master

# Remote branches
$ git branch -r
...
origin/0.15
origin/0.16
origin/0.20
...
origin/HEAD
origin/master

For now you only have one local branch called master; it is a branch of origin/master. Later we will see how to create new branches. The most basic thing you will want to do is to keep your branch up-to-date. This is done through a two-step process, where the first step is to fetch the origin branches:

$ git fetch origin
...
Fetching refs/heads/0.15 from git://git.hforge.org/itools.git...
Fetching refs/heads/0.16 from git://git.hforge.org/itools.git...
Fetching refs/heads/0.20 from git://git.hforge.org/itools.git...
...

This command updates your copy of the origin branches. Now you can ask what the difference is between your local branch master and the origin master branch:

$ git log master..origin
commit f4b64a9e49ed9ce66858ccd5461a0ef48a5870af
Author: J. David Ibanez <jdavid@itaapy.com>
Date:   Thu Apr 5 11:57:57 2007 +0200

    [xml] No more subclassing the Element class.

commit 76698ec4bbea9f27447c2aee71c76af5a510efd9
Author: J.
David Ibanez <jdavid@itaapy.com>
Date:   Wed Apr 4 19:26:13 2007 +0200

    [xhtml,html] Now XHTML and HTML elements are the same...

The output shows the new patches available (if your code is up-to-date the output will be empty). To synchronise with the trunk, use git rebase:

$ git rebase origin
First, rewinding head to replay your work on top of it...
HEAD is now at f4b64a9... master
Fast-forwarded master to origin.

Now imagine that you want to work not in the master branch, but in the latest stable branch, 0.60 in this example. To do so you will have to create a new local branch based on 0.60; this is done with the command git branch:

$ git branch 0.60 origin/0.60
Branch 0.60 set up to track remote branch refs/remotes/origin/0.60.
$ git branch
  0.60
* master

To switch from one branch to another we use git checkout:

$ git checkout 0.60
Switched to branch "0.60"
$ git branch
* 0.60
  master

As we have seen before, to synchronize your 0.60 branch you will use git fetch and git rebase:

# Fetch origin
$ git fetch origin

# Synchronize
$ git checkout 0.60
$ git rebase origin/0.60

Now maybe you want to make some changes to itools. To use as an example, we are going to make some really useless changes:

# Edit an existing file
$ vi __init__.py
...
# Add a new file
$ vi USELESS.txt
...

What have we done? Use git status to have an overview:

$ git status
# On branch 0.60
# Changed but not updated:
#   (use "git add <file>..." to update what will be committed)
#
#       modified:   __init__.py
#
# Untracked files:
#   (use "git add <file>..." to include in what will be committed)
#
#       USELESS.txt
no changes added to commit (use "git add" and/or "git commit -a")

One thing the excerpt above shows is how important it is to read the output of the git commands: it will often tell you what to do next. Before committing, it is a good idea to double-check the changes we have done; use git diff for this purpose:

$ git diff
diff --git a/USELESS.txt b/USELESS.txt
new file mode 100644
index 0000000..ddb4b9a
--- /dev/null
+++ b/USELESS.txt
@@ -0,0 +1 @@
+I was here!
diff --git a/__init__.py b/__init__.py
index 482b002..8a1ea48 100644
--- a/__init__.py
+++ b/__init__.py
@@ -16,8 +16,14 @@
 # along with this program; if not, write to the Free Software
 # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA...

+"""
+This is itools. Period.
+"""
+
+
 # Import from itools
 from utils import get_version, get_abspath

+# The version
 __version__ = get_version(globals())

Now you must tell git what changes you want to commit; for this we use the git add command:

$ git add __init__.py
$ git add USELESS.txt
$ git status
# On branch 0.60
# Changes to be committed:
#   (use "git reset HEAD <file>..." to unstage)
#
#       new file:   USELESS.txt
#       modified:   __init__.py
#

And now we can commit:

$ git commit
Created commit 612f41c: Add some useless comments.
 2 files changed, 7 insertions(+), 0 deletions(-)
 create mode 100644 USELESS.txt

The call to git commit will open your favourite text editor so you can add a sensible description for your commit. We have seen the use of git add to add a new file or to tell that an existing file has been modified. There are two other commands you will need: git mv, to rename or move a file, and git rm, to remove one.

To send your patches to be included in the main tree, the first step is always to synchronize:

$ git fetch origin
$ git rebase origin/0.60
...

If there have been new patches in the origin branch that conflict with your own patches, git rebase will fail, but it will give you instructions on how to address the issue.
Read these instructions carefully, solve the conflicts and go ahead. Now you can check the patches you have done with git log:

$ git log origin/0.60..0.60
commit 612f41cd3aa3f9dce0f0f54a55e46971d29e5ee8
Author: J. David Ibanez <jdavid@itaapy.com>
Date:   Wed Jun 27 15:50:45 2007 +0200

    Add some useless comments.

Everything is alright? Time to build the patches, with git format-patch:

$ git format-patch origin/0.60
0001-Add-some-useless-comments.patch

This call creates one file for every patch. Now you can send the patches. There are two ways: upload to bugzilla, or send by email. If there is an open issue in bugzilla for the bug or enhancement your patch addresses, it is best to attach the patch to that issue. If there is not, you may want to open one. The following figure shows the Bugzilla interface to attach a patch.

Bugzilla's interface to attach a patch.

To send a patch by email use the git send-email command:

$ git send-email --to itools@hforge.org \
> 0001-Add-some-useless-comments.patch

Note that the address to send the patches to is the itools mailing list. You may also send the patch directly to me, jdavid@itaapy.com.

See below a summary of the git commands seen in this chapter:

git add
git branch
git checkout
git clone
git commit
git config
git diff
git fetch
git format-patch
git log
git rebase
git mv
git rm
git send-email
git status

For details about a command type:

$ git <command> --help
http://www.hforge.org/docs/git
CC-MAIN-2019-09
en
refinedweb
This topic shows how to serialize .NET objects to JSON and deserialize JSON data back into objects using the DataContractJsonSerializer. Note: If an error occurs during serialization of an outgoing reply on the server, or the reply operation throws an exception for some other reason, it may not get returned to the client as a fault.

The walkthrough below uses the following data contract:

[DataContract]
internal class Person
{
    [DataMember]
    internal string name;

    [DataMember]
    internal int age;
}

To serialize an instance of type Person to JSON

1. Create an instance of the Person type.

Person p = new Person();
p.name = "John";
p.age = 42;

2. Serialize the Person object to a memory stream using the DataContractJsonSerializer.

MemoryStream stream1 = new MemoryStream();
DataContractJsonSerializer ser = new DataContractJsonSerializer(typeof(Person));

3. Use the WriteObject method to write JSON data to the stream.

ser.WriteObject(stream1, p);

4. Show the JSON output.

stream1.Position = 0;
StreamReader sr = new StreamReader(stream1);
Console.Write("JSON form of Person object: ");
Console.WriteLine(sr.ReadToEnd());

To deserialize an instance of type Person from JSON

1. Deserialize the JSON-encoded data into a new instance of Person by using the ReadObject method of the DataContractJsonSerializer.

stream1.Position = 0;
Person p2 = (Person)ser.ReadObject(stream1);

2. Show the results.

Console.WriteLine($"Deserialized back, got name={p2.name}, age={p2.age}");

Example

// Create a User object and serialize it to a JSON stream.
public static string WriteFromObject()
{
    // Create User object.
    User user = new User("Bob", 42);

    // Create a stream to serialize the object to.
    MemoryStream ms = new MemoryStream();

    // Serialize the User object to the stream.
    DataContractJsonSerializer ser = new DataContractJsonSerializer(typeof(User));
    ser.WriteObject(ms, user);
    byte[] json = ms.ToArray();
    ms.Close();
    return Encoding.UTF8.GetString(json, 0, json.Length);
}

// Deserialize a JSON stream to a User object.
public static User ReadToObject(string json)
{
    User deserializedUser = new User();
    MemoryStream ms = new MemoryStream(Encoding.UTF8.GetBytes(json));
    DataContractJsonSerializer ser = new DataContractJsonSerializer(deserializedUser.GetType());
    deserializedUser = ser.ReadObject(ms) as User;
    ms.Close();
    return deserializedUser;
}

Note The JSON serializer throws a serialization exception for data contracts that have multiple members with the same name, as shown in the following sample code.

[DataContract]
public class TestDuplicateDataBase
{
    [DataMember]
    public int field1 = 123;
}

[DataContract]
public class TestDuplicateDataDerived : TestDuplicateDataBase
{
    [DataMember]
    public new int field1 = 999;
}
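Putting the two helper methods together, a round trip could look like this (a sketch; the User type is assumed to have a (string, int) constructor as used above, and the exact member order of the JSON output depends on the data contract):

string json = WriteFromObject();
Console.WriteLine(json);               // e.g. {"Age":42,"Name":"Bob"}
User restored = ReadToObject(json);
Console.WriteLine(restored != null);   // True when deserialization succeeds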
https://docs.microsoft.com/en-us/dotnet/framework/wcf/feature-details/how-to-serialize-and-deserialize-json-data
CC-MAIN-2019-09
en
refinedweb
pthread_barrier_destroy (3p)

PROLOG
This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux.

NAME
pthread_barrier_destroy, pthread_barrier_init — destroy and initialize a barrier object

SYNOPSIS
#include <pthread.h>

int pthread_barrier_destroy(pthread_barrier_t *barrier);
int pthread_barrier_init(pthread_barrier_t *restrict barrier,
    const pthread_barrierattr_t *restrict attr, unsigned count);

DESCRIPTION
The pthread_barrier_destroy() function shall destroy the barrier referenced by barrier and release any resources used by the barrier. The pthread_barrier_init() function shall initialize the barrier referenced by barrier with attributes referenced by attr; if attr is NULL, the default barrier attributes shall be used. The count argument specifies the number of threads that must call pthread_barrier_wait() before any of them successfully return from the call.

RETURN VALUE
Upon successful completion, these functions shall return zero; otherwise, an error number shall be returned to indicate the error.

ERRORS
The pthread_barrier_init() function shall fail if:

- EAGAIN - The system lacks the necessary resources to initialize another barrier.
- EINVAL - The value specified by count is equal to zero.
- ENOMEM - Insufficient memory exists to initialize the barrier.
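This page covers only initialization and destruction; in a typical program the barrier is also waited on with pthread_barrier_wait(), which is documented separately. A minimal usage sketch (ours, not part of the manual page):

#include <pthread.h>
#include <stddef.h>

#define NTHREADS 4

static pthread_barrier_t barrier;

static void *worker(void *arg) {
    (void)arg;
    /* ... per-thread setup work ... */
    int rc = pthread_barrier_wait(&barrier);
    /* one thread gets PTHREAD_BARRIER_SERIAL_THREAD, the others 0;
       any other value is an error number */
    if (rc != 0 && rc != PTHREAD_BARRIER_SERIAL_THREAD)
        return NULL;
    /* from here on, all NTHREADS threads have passed the barrier */
    return NULL;
}

int main(void) {
    pthread_t t[NTHREADS];
    if (pthread_barrier_init(&barrier, NULL, NTHREADS) != 0)
        return 1;
    for (int i = 0; i < NTHREADS; ++i)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; ++i)
        pthread_join(t[i], NULL);
    pthread_barrier_destroy(&barrier);
    return 0;
}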
https://readtheman.io/pages/3p/pthread_barrier_destroy
CC-MAIN-2019-09
en
refinedweb
Advanced Poisson solvers¶

The PoissonSolver with default parameters uses zero boundary conditions on the cell boundaries. This becomes a problem in systems involving a large dipole moment (due to, e.g., plasmonic charge oscillation on a nanoparticle). The potential due to the dipole is long-ranged and, thus, the converged potential requires large vacuum sizes. However, in the LCAO approach a large vacuum size is often unnecessary. Thus, to avoid using large vacuum sizes but still get a converged potential, one can use two approaches or their combination: 1) use multipole moment corrections or 2) solve the Poisson equation on an extended grid. These two approaches are implemented in ExtendedPoissonSolver and ExtraVacuumPoissonSolver. The regular PoissonSolver in GPAW also has the option remove_moment.

In any nano-particle plasmonics calculation, it is necessary to use multipole correction. Without corrections, more than 10Å of vacuum is required for converged results.

Multipole moment corrections¶

The boundary conditions can be improved by adding multipole moment corrections to the density so that the corresponding multipoles of the density vanish. The potential of these corrections is added to the obtained potential. For a description of the method, see [1]. This can be accomplished by the following solver:

from gpaw.poisson_extended import ExtendedPoissonSolver
poissonsolver = ExtendedPoissonSolver(eps=eps, moment_corrections=4)

This corrects the first 4 multipole moments, i.e., \(s\), \(p_x\), \(p_y\), and \(p_z\) type multipoles. The range of multipoles can be changed via the moment_corrections parameter. For example, moment_corrections=9 includes, in addition to the previous multipoles, also \(d_{xx}\), \(d_{xy}\), \(d_{yy}\), \(d_{yz}\), and \(d_{zz}\) type multipoles. This setting usually suffices for spherical-like metallic nanoparticles, but more complex geometries require the inclusion of very high multipoles or, alternatively, a multicenter multipole approach. For this, consider the advanced syntax of moment_corrections. The previous code snippet is equivalent to:

from gpaw.poisson_extended import ExtendedPoissonSolver
poissonsolver = ExtendedPoissonSolver(eps=eps,
                                      moment_corrections=[{'moms': range(4), 'center': None}])

Here moment_corrections is a list of dictionaries with the following keywords: moms specifies the considered multipole moments, e.g., range(4) corresponds to \(s\), \(p_x\), \(p_y\), and \(p_z\) multipoles, and center specifies the center of the added corrections in atomic units (None corresponds to the center of the cell).

As an example, consider a metallic nanoparticle dimer where the nanoparticle centers are at (x1, y1, z1) Å and (x2, y2, z2) Å. In this case, the following settings for the ExtendedPoissonSolver may be tried out:

import numpy as np
from ase.units import Bohr
from gpaw.poisson_extended import ExtendedPoissonSolver

moms = range(4)
center1 = np.array([x1, y1, z1]) / Bohr
center2 = np.array([x2, y2, z2]) / Bohr
poissonsolver = ExtendedPoissonSolver(eps=eps,
                                      moment_corrections=[{'moms': moms, 'center': center1},
                                                          {'moms': moms, 'center': center2}])

When multiple centers are used, the multipole moments are calculated on non-overlapping regions of the calculation cell. Each point in space is associated with its closest center. See Voronoi diagrams for an analogous illustration of the partitioning of a plane.

Adding extra vacuum to the Poisson grid¶

The multipole correction scheme is not always successful for complex system geometries.
For these cases, one can use a separate large grid just for solving the Hartree potential. Such a large grid can be set up by using the ExtraVacuumPoissonSolver wrapper:

from gpaw.poisson import PoissonSolver
from gpaw.poisson_extravacuum import ExtraVacuumPoissonSolver
poissonsolver = ExtraVacuumPoissonSolver(gpts=(256, 256, 256),
                                         poissonsolver_large=PoissonSolver(eps=eps))

This uses the given poissonsolver_large to solve the Poisson equation on a large grid defined by the number of grid points gpts. The size of the grid is given in the units of the Poisson grid (this is usually the same as the fine grid). If using the FDPoissonSolver, it is important to use grid sizes that are divisible by high powers of 2 to accelerate the multigrid scheme.

To speed up the calculation of the Hartree potential on the large grid, one can apply additional coarsening:

poissonsolver = ExtraVacuumPoissonSolver(gpts=(256, 256, 256),
                                         poissonsolver_large=PoissonSolver(eps=eps),
                                         coarses=1,
                                         poissonsolver_small=PoissonSolver(eps=eps))

The coarses parameter describes how many times the given large grid is coarsened before poissonsolver_large is used to solve the Poisson equation there. With the given value coarses=1, the grid is coarsened once and the actual calculation grid is of size (128, 128, 128), with the grid spacing twice as large compared to the original one. The obtained coarse potential is used to correct the boundary conditions of the potential calculated on the original small and fine grid by poissonsolver_small.

As ExtraVacuumPoissonSolver is a wrapper, it can be combined with any PoissonSolver instance. For example, one can define multiple successively larger grids via:

from gpaw.poisson import PoissonSolver
from gpaw.poisson_extravacuum import ExtraVacuumPoissonSolver

poissonsolver0 = ExtraVacuumPoissonSolver(gpts=(256, 256, 256),
                                          poissonsolver_large=PoissonSolver(eps=eps),
                                          coarses=1,
                                          poissonsolver_small=PoissonSolver(eps=eps))
poissonsolver = ExtraVacuumPoissonSolver(gpts=(256, 256, 256),
                                         poissonsolver_large=poissonsolver0,
                                         coarses=1,
                                         poissonsolver_small=PoissonSolver(eps=eps))

See poissonsolver.get_description() or the txt output for the corresponding grids.
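The introduction mentioned that the two approaches can be combined. One plausible combination (a sketch under the assumption, stated above, that the wrapper accepts any solver instance; eps as in the earlier snippets) is to let the small-grid solver carry the multipole corrections while the wrapper provides the extended grid:

from gpaw.poisson import PoissonSolver
from gpaw.poisson_extended import ExtendedPoissonSolver
from gpaw.poisson_extravacuum import ExtraVacuumPoissonSolver

poissonsolver = ExtraVacuumPoissonSolver(
    gpts=(256, 256, 256),
    poissonsolver_large=PoissonSolver(eps=eps),
    coarses=1,
    poissonsolver_small=ExtendedPoissonSolver(eps=eps,
                                              moment_corrections=4))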
https://wiki.fysik.dtu.dk/gpaw/documentation/poisson.html
CC-MAIN-2019-09
en
refinedweb
This program finds all roots of a quadratic equation from its coefficients, handling real and complex roots via the discriminant.

public class Quadratic {

    public static void main(String[] args) {

        double a = 2.3, b = 4, c = 5.6;
        double root1, root2;

        double determinant = b * b - 4 * a * c;

        // condition for real and different roots
        if (determinant > 0) {
            root1 = (-b + Math.sqrt(determinant)) / (2 * a);
            root2 = (-b - Math.sqrt(determinant)) / (2 * a);
            System.out.format("root1 = %.2f and root2 = %.2f", root1, root2);
        }
        // condition for real and equal roots
        else if (determinant == 0) {
            root1 = root2 = -b / (2 * a);
            System.out.format("root1 = root2 = %.2f;", root1);
        }
        // if the roots are not real (complex roots)
        else {
            double realPart = -b / (2 * a);
            double imaginaryPart = Math.sqrt(-determinant) / (2 * a);
            System.out.format("root1 = %.2f+%.2fi and root2 = %.2f-%.2fi",
                              realPart, imaginaryPart, realPart, imaginaryPart);
        }
    }
}

The calculated roots (either real or complex) are printed on the screen using the format() function in Java. The format() function can also be replaced by printf() as:

System.out.printf("root1 = root2 = %.2f;", root1);
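With the sample coefficients a = 2.3, b = 4, c = 5.6, the discriminant is 4*4 - 4*2.3*5.6 = 16 - 51.52 = -35.52, so the complex branch runs. Since -b / (2a) = -4 / 4.6 ≈ -0.87 and sqrt(35.52) / 4.6 ≈ 1.30, the program prints output along the lines of:

root1 = -0.87+1.30i and root2 = -0.87-1.30i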
https://www.programiz.com/java-programming/examples/quadratic-roots-equation
CC-MAIN-2019-09
en
refinedweb
Despite the title, this post is not really about sorting. Instead, the aim here is to inspire the following attitude: when faced with a problem, instead of implementing the first solution that comes to mind, ask as many questions about the problem as you can; very often this will lead to much better solutions. As this post illustrates, this is the case even if the given problem seems totally trivial. Here is a typical programming task: sort an array of $n$ integer values. Any well-educated computer scientist knows multiple solutions to this problem: merge sort, quicksort, heap sort and a lot of other commonly known general-purpose sorting algorithms. However, instead of immediately coming up with a solution to the problem, a better attitude is to ask questions about it first. For instance, one may ask: what do the array values represent? Are they already ordered in some special pattern? You will likely come up with other interesting questions as well if you try it. To clarify the importance of asking those questions, consider the case in which the array values are ages of users of a website. Your goal might be to compute the median user age. One way to do this is by first sorting the array: the median is then either the middle element if the array length is odd or the average of the two center elements if it is even. The sorting part can be obviously done using well-known algorithms with run-time complexity $O(n\log n)$, but since no human being has ever lived for more than 125 years, we can assume that no age value exceeds 124, meaning the sorting part can be done in run-time $O(n)$ as shown below. The following algorithm is implemented in C, but only basic features of the language are used for maximum clarity: void sort_ages(int* ages, const size_t length) { int count[125]; /* initialize all elements of count to zero */ for (int age = 0; age < 125; ++age) { count[age] = 0; } /* count the number of occurrences of each age value */ for (int i = 0; i < length; ++i) { ++count[ages[i]]; } /* sort (rewrite) the ages array in place using count */ int i = 0; for (int age = 0; age < 125; ++age) { while (count[age] > 0) { ages[i] = age; --count[age]; ++i; } } } double median_age(int* ages, const size_t length) { sort_ages(ages, length); int age1 = ages[length / 2]; int age2 = ages[(length-1) / 2]; /* * age1 == age2 if ages has an odd number of elements * because (length-1)/2 == length/2 if length is odd; * in this case, we sum the middle value of ages twice, * so this formula always yields the correct median */ return (double) (age1 + age2) / 2.0; } The algorithm above runs in $O(n)$ time and is based on a simple fact: since age values are constrained to a small range, we can count the number of occurrences of each age value and then write them back in order on the original array ages: this will sort it in place. Computing the median is then a trivial task. This is a significant improvement over algorithms which run in $O(n\log n)$ time. 
Besides having a better bound on the asymptotic run-time for large values of $n$, the algorithm just shown has a few other important advantages: - each value on the input array ages is read only once and in sequence, so the number of cache misses is minimized when compared to algorithms such as randomized quick-sort - no dynamic memory allocation is done, and only a small and fixed amount of memory is allocated on the stack (the memory usage does not grow with $n$) - the sorting step is executed in a single function (without additional function calls) While it may sound as if we are done with our discussion, the magic is actually not over yet. If you look at the definition of sort_ages, it may occur to you that to compute the median value of the ages array, we don't have to sort it at all! We can instead determine the median directly from count: double median_age2(const int* ages, const size_t length) { int count[125]; /* initialize all elements of count to zero */ for (int age = 0; age < 125; ++age) { count[age] = 0; } /* count the number of occurrences of each age value */ for (int i = 0; i < length; ++i) { ++count[ages[i]]; } /* determine the median age from count */ int i = 0; int age_sum = 0; for (int age = 0; age < 125; ++age) { while (count[age] > 0) { if (i == (length-1)/2) { age_sum += age; } if (i == length/2) { age_sum += age; return (double) age_sum / 2.0; } --count[age]; ++i; } } return 0.0; } The median age computation still runs in $O(n)$ time and has all other advantages from the previous method, but now we do not rewrite the entire ages array to have it sorted, reducing the number of cache misses even further. As a matter of fact, we do not modify ages at all! Instead, since count is an exact and compact representation of the sorted values from ages, we can compute the median age directly from it. As before, when ages has an odd number of elements, we end up summing the middle element of its sorted version twice, but since we divide the sum by two, we get the correct median value at the end. Another small advantage worth mentioning is the fact that median_age2 can naturally handle zero-length arrays, while median_age assumes ages is non-empty. To summarize, asking a few simple questions about our problem led us to jump from a median computation algorithm which was based on sorting to another vastly superior algorithm which actually does not perform any sorting at all.
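A quick check (our addition, assuming the definitions above are in scope):

#include <stdio.h>

int main(void) {
    int ages[] = {30, 22, 51, 22};
    /* sorted: 22, 22, 30, 51 -> median = (22 + 30) / 2 = 26.0 */
    printf("%.1f\n", median_age2(ages, 4));
    return 0;
}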
https://diego.assencio.com/?index=2720885db684284ab33d1ba3b65cab1b
CC-MAIN-2019-09
en
refinedweb
testing Package¶

Scripts and assert tools related to running unit tests. These scripts also allow running test suites in separate processes and aggregating the results.

doctest_tools Module¶

Tools for having doctest and unittest work together more nicely. Eclipse's PyDev plugin will run your unittest files for you very nicely. The doctest_for_module function allows you to easily run the doctest for a module alongside your standard unit tests within Eclipse.

traits.testing.doctest_tools.doctest_for_module(module)[source]¶
Create a TestCase from a module's doctests that will be run by the standard unittest.main(). Example tests/test_foo.py:

import unittest
import foo
from traits.testing.api import doctest_for_module

class FooTestCase(unittest.TestCase):
    ...

class FooDocTest(doctest_for_module(foo)):
    pass

if __name__ == "__main__":
    # This will run and report both FooTestCase and the doctests in
    # module foo.
    unittest.main()

Alternatively, you can say:

FooDocTest = doctest_for_module(foo)

instead of:

class FooDocTest(doctest_for_module(foo)):
    pass

nose_tools Module¶

Non-standard functions for the 'nose' testing framework.

trait_assert_tools Module¶

Trait assert mixin class to simplify test implementation for Trait Classes.

- class traits.testing.unittest_tools.UnittestTools[source]¶

Bases: object

Mixin class to augment the unittest.TestCase class with useful trait-related assert methods.

assertTraitChanges(obj, trait, count=None, callableObj=None, *args, **kwargs)[source]¶
Assert an object trait changes a given number of times. Assert that the class trait changes exactly count times during execution of the provided function. This method can also be used in a with statement to assert that a class trait has changed during the execution of the code inside the with statement (similar to the assertRaises method). Please note that in that case the context manager returns itself and the user can introspect the information of:

- The last event fired by accessing the event attribute of the returned object.
- All the fired events by accessing the events attribute of the returned object.

Note that in the case of chained properties (trait 'foo' depends on 'bar', which in turn depends on 'baz'), the order in which the corresponding trait events appear in the events attribute is not well-defined, and may depend on dictionary ordering.

Example:

class MyClass(HasTraits):
    number = Float(2.0)

my_class = MyClass()
with self.assertTraitChanges(my_class, 'number', count=1):
    my_class.number = 3.0

Note

- Checking if the provided trait corresponds to valid traits in the class is not implemented yet.
- Using the functional version of the assert method requires the count argument to be given even if it is None.

assertTraitDoesNotChange(obj, trait, callableObj=None, *args, **kwargs)[source]¶
Assert an object trait does not change. Assert that the class trait does not change during execution of the provided function.

assertMultiTraitChanges(objects, traits_modified, traits_not_modified)[source]¶
Assert that traits on multiple objects do or do not change. This combines some of the functionality of assertTraitChanges and assertTraitDoesNotChange.

assertTraitChangesAsync(*args, **kwds)[source]¶
Assert an object trait eventually changes. Context manager used to assert that the given trait changes at least count times within the given timeout, as a result of execution of the body of the corresponding with block. The trait changes are permitted to occur asynchronously.
        Example usage:

            with self.assertTraitChangesAsync(my_object, 'SomeEvent', count=4):
                <do stuff that should cause my_object.SomeEvent to be
                 fired at least 4 times within the next 5 seconds>

    assertEventuallyTrue(obj, trait, condition, timeout=5.0)

        Assert that the given condition is eventually true.

    assertDeprecated(*args, **kwds)

        Assert that the code inside the with block is deprecated. Intended for testing uses of traits.util.deprecated.deprecated.
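To make the mixin's use concrete, here is a minimal sketch of a test case combining the context-manager forms described above. The Point class and its trait are hypothetical, defined only for this example; the import paths and assertion signatures follow the reference above:

    import unittest

    from traits.api import Float, HasTraits
    from traits.testing.unittest_tools import UnittestTools


    class Point(HasTraits):
        # Hypothetical class, defined only for this example.
        x = Float(0.0)


    class PointTestCase(unittest.TestCase, UnittestTools):

        def test_x_change_fires_once(self):
            point = Point()
            # Context-manager form: assert exactly one change notification.
            with self.assertTraitChanges(point, 'x', count=1):
                point.x = 1.0

        def test_same_value_fires_nothing(self):
            point = Point()
            # Assigning an equal value should produce no trait change event.
            with self.assertTraitDoesNotChange(point, 'x'):
                point.x = 0.0


    if __name__ == "__main__":
        unittest.main()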
Understanding Mule Configuration

About XML Configuration

Mule uses an XML configuration to define each application, by fully describing the constructs required to run the application. A basic Mule application can use a very simple configuration. We will examine all the pieces of such a configuration in detail below. Note for now that even a minimal configuration is a complete application, and that it's quite readable: even a brief acquaintance with Mule makes it clear what a simple configuration does - for instance, that it copies messages from standard input to standard output.

Schema References

The syntax of Mule configurations is defined by a set of XML schemas. Each configuration lists the schemas it uses and gives the URLs where they are found. The majority of them will be the Mule schemas for the version of Mule being used, but in addition there might be third-party schemas, for instance:

- Spring schemas, which define the syntax for any Spring elements (such as Spring beans) being used
- CXF schemas, used to configure web services processed by Mule's CXF module

Every schema referenced in a configuration is defined by two pieces of data:

- its namespace, which is a URI
- its location, which is a URL

The configuration both defines the schema's namespace URI as an XML namespace and associates the schema's namespace and location. This is done in the top-level mule element: the core Mule schema's namespace is defined as the default namespace for the configuration (the best choice, because so many of the configuration's elements are part of the core namespace), and the namespace for Mule's stdio transport, which allows communication using standard I/O, is given the "stdio" prefix. The convention for a Mule module or transport's schema is to use its name for the prefix. The xsi:schemaLocation attribute then associates the schemas' namespaces with their locations, giving the location for the stdio schema and for the core schema. It is required that a Mule configuration contain these things, because they allow the schemas to be found so that the configuration can be validated against them.

Default Values

Besides defining the syntax of the elements and attributes that they describe, schemas can also define default values for them. Knowing these can be extremely useful in making your configurations readable, since they won't have to be cluttered with unnecessary information. Default values can be looked up in the schemas themselves, or in the Mule documentation for the modules and transports that define them. For example, the definition of the <poll> element, which polls an endpoint repeatedly, includes a polling frequency attribute with a default value of 1 second; it is only necessary to specify this attribute when overriding that default.

Enumerated Values

We will look at this in more detail in the following sections. Note that, as always, it will be necessary to reference the proper schemas.

Spring Beans

The simplest use of Spring in a Mule configuration is to define Spring beans, which are placed into the Mule registry.

Connectors

Connectors configure transport-level behavior shared by their endpoints. For example:

- A vm connector can specify that all of its endpoints use persistent queues.
- A file connector can specify that each of its endpoints will be polled once a second, and also the directory that files will be moved to once they are processed.

Endpoints

Endpoints specify where messages are read from and written to, as well as how that will be done. An endpoint specifies its location and can refer to a connector like those shown above; the generic address attribute is one way to specify its location. A file endpoint, for instance, inherits the polling frequency and move-to directory configured on its file connector.
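The original page's configuration listings are not reproduced here, but a sketch of the kind of connector and endpoint declarations just described might look like the following. The element names are standard Mule 3 syntax; the names, paths, and addresses are illustrative only:

    <!-- Global file connector: poll once a second, move processed files
         aside. The connector name and directories are made up. -->
    <file:connector name="fileIn" pollingFrequency="1000"
                    moveToDirectory="/tmp/processed"/>

    <flow name="fileBridge">
        <!-- Prefix-style endpoint referring to the connector above -->
        <file:inbound-endpoint path="/tmp/incoming" connector-ref="fileIn"/>
        <!-- Generic endpoint using the address attribute instead -->
        <outbound-endpoint address="vm://orders" exchange-pattern="one-way"/>
    </flow>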
The transport used by an endpoint is determined by the prefix of its element name, if there is one; if not, the prefix is determined from the element's address attribute. The prefix style is preferred, particularly when the location is complex. One of the most important attributes of an endpoint is its message exchange pattern (MEP, for short), which is discussed further below.

Transformers

A transformer converts messages from one form to another. For example, one transformer might convert the current message to JSON, specifying special handling for the conversion of the org.mule.tck.testmodels.fruit.Orange class.

Filters

A filter decides whether a message continues through the flow. One filter might continue processing of the current message only if it matches a specified pattern; another only if it is an XML document. There are a few special filters that extend the power of the other filters. The first is message-filter: as above, it continues processing of the current message only if it matches the specified pattern, but now any messages that don't match, rather than being dropped, are sent to a dead-letter queue for further processing.

Filters can be configured as global elements and referred to where they are used, or configured at their point of use. For more about Mule filters, see Using Filters.

Flows

The rest of this page examines an example flow that we'll be referring to as we examine its parts. This flow accepts and processes orders. How the flow's configuration maps to its logic:

1. A message is read from an HTTP listener.
2. The message is transformed to a string.
3. This string is used as a key to look up the list of orders in a database.
4. The order is now converted to XML.
5. If the order is not ready to be processed, it is skipped.
6. The list is optionally logged, for debugging purposes.
7. Each order in the list is split into a separate message.
8. A message enricher is used to add information to the message.
9. Authorize.net is called to authorize the order.
11. The email address in the order is saved for later use.
12. A Java component is called to preprocess the order.
13. Another flow, named processOrder, is called to process the order.
14. The confirmation returned by processOrder is e-mailed to the address in the order.

If processing the order caused an exception, the exception strategy (15) is called:

16. All the message processors in this chain are called to handle the exception.
17. First, the message is converted to a string.
18. Last, this string is put on a queue of errors to be manually processed.

MEPs

One of the most important properties of an endpoint is whether it is request-response or one-way. For example, step 3 calls a JDBC query, using the current message as a parameter, and replaces the current message with the query's result: because this endpoint is request-response, the result of the query becomes the current message. In contrast, at step 18 any orders that were not processed correctly are put on a JMS queue for manual examination: because this endpoint is one-way (the default for JMS), the current message does not change, so the message sent back to the caller is unaffected.

Transformers in the example flow:

2. The message, which is a byte array, is converted to a string, allowing it to be the key in a database look-up.
4. The order read from the database is converted to an XML document.
11. The email address is stored in a message property. Note that, unlike most transformers, the message-properties-transformer does not affect the message's payload, only its properties.

Message Enricher

8. The enricher calls a connector to retrieve information that it stores as a message property. Because the connector is called within an enricher, its return value is processed by the enricher rather than becoming the current message.

Logger

The logger element allows debugging information to be written from the flow.
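The logger's configuration is a one-liner. A hedged sketch follows; the message expression and level are illustrative, not taken from the original listing:

    <!-- Log each order list at DEBUG level; silent unless DEBUG is enabled -->
    <logger level="DEBUG" message="Orders fetched: #[payload]"/>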
For more about the logger, see Logger Component Reference.

6. Each order fetched from the database is output, but only if DEBUG mode is enabled. This means that the flow is normally silent, but debugging can easily be enabled when required.

Filters

Filters determine whether a message is processed or not.

5. If the status of the document fetched is not "ready", its processing is skipped.

Routers

A router changes the flow of the message. Among other possibilities, it might choose among different message processors, split one message into many, or join many messages into one. For more about routers, see Routing Message Processors. In the example flow, step 7 is a splitter that turns the list of orders into individual messages.

Components and Entry Point Resolvers

Entry point resolvers determine which method of a component is called. For example:

- One resolver causes the two methods preProcessXMLOrder and preProcessTextOrder to become candidates; Mule chooses between them by doing reflection, using the type of the message.
- Another calls the method whose name is in the message property methodToCall.
- A third calls the generate method, even though it takes no arguments.

Entry point resolvers are for advanced use. Almost all of the time, Mule finds the right method to call without needing special guidance.

12. This is a Java component, specified by its class name, which is called with the current message. In this case, it preprocesses the message.

For more about entry point resolvers, see Entry Point Resolver Configuration Reference.

Anypoint Connectors

An Anypoint connector calls a cloud service.

9. This calls authorize.net to authorize a credit card purchase, passing it information from the message.

For more about connectors, see Anypoint Connectors.

Processor Chain

A processor chain is a list of message processors, which will be executed in order. It allows you to use more than one processor where a configuration otherwise allows only one, exactly like putting a list of Java statements between curly braces.

16. This performs two steps as part of the exception strategy: it first transforms and then mails the current message.

Calling one flow from another is also possible:

13. This calls a flow to process an order that has already been pre-processed, and returns a confirmation message.

Exception Strategies

An exception strategy is called whenever an exception occurs in its scope, much like an exception handler in Java. It can define what to do with any pending transactions and whether the exception is fatal for the flow, as well as logic for handling the exception. Step 15 above is the exception strategy for the order-processing flow.

Configuration Patterns

As of Mule 3.1.1, all configuration patterns are in the pattern namespace as shown. In earlier Mule 3 releases, they were in the core namespace, except for web-service-proxy, which was ws:proxy. These older names will continue to work for the Mule 3.1.x releases, but will be removed after that. Some examples of what patterns can do:

- One pattern copies messages from a JMS queue to a JMS topic, using a transaction.
- Another reads byte arrays from an inbound vm endpoint, transforms them to strings, and writes them to an outbound vm endpoint. The responses are strings, which are transformed to byte arrays and then written to the outbound endpoint.
- A simple service can echo requests, and a simple web service can use a CXF component. Note how little configuration is required to create them.
- A validator can check that the payload is of the correct type, using a filter, before calling the order service.

Threading

The default configuration will usually perform well, but if you determine that, for instance, your endpoints are receiving so much traffic that they need additional threads to process all of it, the threading configuration can be adjusted.

Notifications

Listeners can be registered to receive notifications of events within Mule. In the example below, two listener beans are created, and both are registered for both component and endpoint notifications.
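A sketch of what such a notifications configuration might look like. The listener class names come from the surrounding text, while the package and bean names are assumptions made for illustration:

    <!-- Listener beans (the package name is hypothetical) -->
    <spring:bean name="componentNotificationLogger"
                 class="org.example.ComponentMessageNotificationLogger"/>
    <spring:bean name="endpointNotificationLogger"
                 class="org.example.EndpointMessageNotificationLogger"/>

    <!-- Enable both notification types and register both listeners -->
    <notifications>
        <notification event="COMPONENT-MESSAGE"/>
        <notification event="ENDPOINT-MESSAGE"/>
        <notification-listener ref="componentNotificationLogger"/>
        <notification-listener ref="endpointNotificationLogger"/>
    </notifications>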
But since ComponentMessageNotificationLogger only implements the interface for component notifications, those are all it will receive (and likewise for EndpointMessageNotificationLogger). For more about notifications, see Notifications Configuration Reference.

Agents

Mule allows you to define agents to extend the functionality of Mule. Mule will manage the agents' lifecycle (initialize them and start them on startup, and stop them and dispose of them on shutdown). Several agents ship with Mule, declared in the management namespace, for example:

- management:jmx-log4j allows JMX to manage Mule's use of Log4J
- management:jmx-default-config allows creating all of the above at once
- management:log4j-notifications creates an agent that propagates Mule notifications to Log4J
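As a minimal illustration - assuming the management namespace has been declared like the other schemas above - a single element is enough to register the default set of JMX agents:

    <!-- Registers the standard set of JMX management agents in one step -->
    <management:jmx-default-config/>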
Symptoms

When you try to open a Windows Management Instrumentation (WMI) namespace on a computer that is running Windows Server 2003 Service Pack 2 (SP2), you receive the following error code:

    0x80041002 (WBEM_E_NOT_FOUND)

Notes

- An example of a namespace is the root\cimv2 or root\rsop namespace.
- Not all "0x80041002 (WBEM_E_NOT_FOUND)" error codes are caused by this problem.

Cause

This issue occurs because the WMI repository is corrupted.

Workaround

To work around this issue, disable the Resultant Set of Policy (RSoP) logging policy. (An illustrative command follows at the end of this article.)

Status

Microsoft has confirmed that this is a problem in the Microsoft products that are listed in the "Applies to" section.

More Information

For more information about the Resultant Set of Policy (RSoP) Group Policy settings, view the following Microsoft TechNet website: For more information about software update terminology, click the following article number to view the article in the Microsoft Knowledge Base:

Properties

Article ID: 2257980 - Last Review: Aug 12, 2010 - Revision: 1
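As an illustration of the workaround above: the "Turn off Resultant Set of Policy logging" Group Policy setting is commonly mapped to the registry value shown below. The key and value name are an assumption to verify for your environment, not taken from this article:

    rem Sketch: disable RSoP logging via its policy registry value (assumed mapping)
    reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\System" /v RSoPLogging /t REG_DWORD /d 0 /f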
- experimental anti-spam blacklist
- new archive feature, to help with cleanup of large wikis. This aims to reduce info-clutter and improve wiki resource usage and performance, in situations where you're not willing to delete content entirely, eg because of historical interest. To enable this feature, create an "archive" folder within your wiki, or visit SOMEOLDPAGE/archive once, and make sure you have "Delete objects" permission. An archive button will be added to the page management form. This moves the current page, its subtopics, and their edit history to the archive folder. Pages in the archive may still be viewed, but they are considered read-only and inaccessible to search bots (though this is not yet enforced - adjust permissions on the archive folder to make it so).
- zope 2.12 requires a change to the standard_error_message object, otherwise errors will appear as a blank page. Delete yours using the ZMI and recreate it with SOMEPAGE/setupDtmlMethods (or /upgradeAll).
- to work around incomplete unicode support in Zope, a zwiki page's __str__ method now returns a utf-8 encoded string. This fixes the ZMI manage_main view on a zwiki page object (#1450).
- also, you'll need to set a string property in the properties tab of your wiki or zope root folder: management_page_charset: UTF-8. (The capitalisation may matter in some cases.) This fixes the ZMI manage_main view on a zwiki folder (#1459).
- however, non-ascii zwiki page content may render incorrectly in the ZMI, and you should not edit zwiki pages in the ZMI, or they may become misencoded. (If it happens, visiting PAGE/fixEncoding or /upgradeAll may fix it.)

This is the first release of Zwiki 2, formerly the ZWiki-unstable branch. To recap, on 2009/01/30 I renamed the old Zwiki stable and unstable repos, and we now have Zwiki 1 (the old stable branch) and Zwiki 2 (the old unstable branch).

The biggest change in Zwiki 2 is that we now work internally with unicode rather than utf8-encoded text. Encoding-related errors in non-ascii wikis are hard to prevent; this change will help us stamp them out. Also, this will permit accurate searching of non-ascii content. Thanks to Wirtschaftsuniversität Wien (Vienna University of Economics and Business) and Willi Langenberger for funding part of this work. There has also been a lot of skins and code cleanup.

Upgrade notes

With an existing wiki containing non-ascii content, you will probably notice new encoding-related errors. (Eg try searching for some non-ascii text.) In this case you need to convert all wiki content from utf-8 to unicode. To do this, visit SOMEPAGE/upgradeAll as manager. As usual, if you have a lot of content this can take several minutes; set your logging level to BLATHER (in zope.conf) and watch the event.log to see progress and be sure it completes successfully. Afterward, most encoding-related errors should be gone (try the search again). If not, first try upgradeAll a second time, then please report (eg to sm on #zwiki) for investigation.

Also, if you have custom skin templates they will likely need updating (see eg the note below about lastEditor). It's probably best to move them out of the way and deal with that later, or re-apply your customisations to the latest templates.

Browsing
- show a "created" timestamp at top right, like Michael Haubenwallner's on the zope3 wiki
- #1414 - indentation was not displayed in literal code blocks (betabug)
- minor css fixes (betabug)
- fixed the display of non-ascii search expressions on the search form (betabug)

Editing
- #517 - more robust and careful updating of backlinks during page renames (Sascha Welter)
- #1171 - more than one double parenthesis link in the same paragraph works now (betabug)
- interwiki links now require at least one word character after the colon
- mail-in no longer requires configuring an external method. Have /etc/aliases post incoming mail to SOMEPAGEURL/mailin.
- when there are multiple mailhosts, choose the one with the alphabetically first id. This lets you use a zope 2.11 "MailHost" when you have a "MaildropHost" still kicking around; may be useful for someone else testing mailhosts.
- drop mailin's unused spam-reporting feature

General
- Unicodegeddon! A big set of changes switching to unicode for internal text handling. Many were committed as a single mega-patch from the early unicode branch. Noteworthy details:
  - for proper operation, wiki content should be converted from utf-8 to unicode, a one-way upgrade step. See upgrade notes above.
  - skin templates should use methods (lastEditor(), lastEditTime() etc.), not properties. If you have customised editform, history, or RecentChanges, they may need to be updated accordingly. Also catalog page brains now contain lastEditor, not last_editor.
  - we use unicode-aware ZCTextIndexes in the wiki catalog by default. SOMEPAGE/upgradeAll (or /setupCatalog) upgrades old TextIndex-based indexes if present. (betabug)
  - we use a simple unicode-aware splitter, mostly taken from the UnicodeLexicon product by Stefan H. Holek (ZPL). Many thanks to you Stefan! (betabug)
  - preserve unicode when evaluating DTML (#1411)
  - many other unicode-related changes/fixes (simon, betabug)
- Skin cleanups: rearrange templates, provide nav links and access keys more consistently, skin machinery cleanups, basic support for alternate filesystem skins. Many of these were committed as a single mega-patch. Noteworthy details:
  - more css ids
  - move accesskeys, links and ratingform into the pageheader template
  - move pagemanagementform into a new pagefooter template
  - add wikiheader and wikifooter - visible header/footer for wiki-contextual views
  - include either pageheader or wikiheader in every view, for appropriate links, access keys, etc. Show all nav links everywhere.
  - avoid duplicate access key definitions, so you don't have to press enter in firefox 3
  - drop the s for search form access key, so s for save edit works without enter
  - some skin machinery cleanup and basic support for alternate filesystem skins, selectable by "skin" property or request var
  - #939 - suppress the "e" access key when already in the edit form (betabug)
  - #1361 - check permission for the edit form more carefully; the edit form should also be viewable when the user has "Add" rights on the wiki. Eg going through the "add" form on a restricted page, then clicking the "Preview" button, raised Unauthorized before this fix. (betabug)
- add /index as an alias for the /index_object page method, drop /updateCatalog
- catalog search results now contain lastEditor, not last_editor
- more tests (simon, betabug)
- code cleanups, documentation (simon)

This and previous releases were made from the Zwiki 1 branch, formerly known as Zwiki-stable. A catch-up release with various bugfixes, code cleanups, and newer translations from launchpad. You'll want this if you use Plone 3.x, to work around a layout glitch with document action icons.
Upgrade notes

Configuring
- /setupDtmlMethods now installs a sitemap.xml, which might reduce load from google and possibly other search bots (Simon Michael, Sascha Welter)
- remove the anti-spam 24 hour indexing delay introduced in 0.41, for quicker indexing of actively-edited pages (#1387)
- recognise a site_footer attribute, like site_header, useful for eg stats tracking
- change the content-type of the SomePage/text (or /src) methods to UTF-8, making the /text view of wiki pages much more useful for non-ascii languages (Sascha Welter)

Browsing
- "fix" the plone 3 document actions layout glitch, by hiding them. With this release Zwiki should work as well with Plone 3.x as previous versions did with Plone 2.x.
- add the wiki navigation links (home, contents, changes..) to all forms
- merge access key help with the help page
- drop the "discussion" nav link activated by GeneralDiscussion; it is a special case and confusable with per-page discussion
- fix a bug preventing mail-out in some Plone versions (#1400)
- strip (some) annoying bottom-quoting from mailins ("—-Original message:—- ...")
- support for mailing darcs patches through a zwiki: keep any text/x-darcs-patch part, as well as the first text/plain part
- the unused "forward to spam@ address to update banned_links & banned_ips" feature has been dropped

Translations
- import latest translations from launchpad
- switch to a launchpad-only translations process
- switch to i18ndude for generating pot files

General
- more functional tests: recentchanges and options (Sascha Welter)
- exception handling cleanups (Sascha Welter)
- #1272 - create PageBrain only for Zwiki Pages. Since we are now ensuring that there is always a catalog in a Zwiki, the method metadataFor() shouldn't be needed any more. But I'm still adding this patch (credits and thanks to koegler), in case some code hits on it in the time between an upgrade and running the /upgradeAll method. (Sascha Welter)
- rewrote the rating module to use a BTree to store votes, instead of a dictionary. Saves some kB writing to Data.fs and avoids conflicts. (Sascha Welter)
- code review and cleanup. All significant methods in Editing have now been reviewed (by Simon) and tightened up.
- make lastLog more robust to prevent possible page view breakage
- pages now returns nothing without a fully-configured catalog (to avoid blowing up the zodb cache when viewing old revisions)
- new methods: expungeLastEditor, expungeLastEditorEverywhere, feedUrl, setCreatorLike, setLastEditorLike
- replaced the subscriberCount, wikiSubscriberCount methods with subscriberCount, pageSubscriberCount, wikiSubscriberCount
- renamed the changes_rss method to edits_rss
- make bare page rendering at the debug prompt work again, cleanup
- include Makefile in the tarball so that folks can see how to run tests
- start a developer style guide, some developer docs cleanup

Also, skin customizers should update their custom templates to avoid the same errors. See #1330 below.

- Plone 3/CMF 2.1 users should upgrade to this release
- Plone 2.x users will not be able to install this release in sites, see #1322 below
- skin customizers should note the wiki navigation links are now in the wikipage template
- a wiki catalog is no longer optional, and will be added if missing
- fixPageEncoding renamed to fixEncoding and fixed
- move wiki navigation links to the wikipage template for easier customizing
- #1041, #1145: ensure a catalog and an outline cache are always present
- #1256: handle mailins with encodings other than utf8
- don't log the body of mail-ins by default
- #141 - uploading a file causes an "edit" mailout now (Sascha Welter)
- also allow a 'Secure Maildrop Host' for sending mail (Sascha Welter, Encolpe Degoute)
- show accurate page hierarchy after adding a new wiki
- #1313 - updated FrontPage links in basic wiki template (Frank Laurijssens)
- #1299 - mail out in plone works, could use more error handling though (Sascha Welter)

Feeds
- #1083 - escape html in rss feeds (Sascha Welter)
- #1294 - changes to get rss feeds to validate; pages_rss and changes_rss now validate
- color issue links in "backlinks" (Frank Laurijssens)

- remove bad import from CMFInit breaking CMF/Plone installation (#1291, #1295)
- /upgradeAll now sets up a catalog and tries to fix some common character encoding problems
- use a more stable contents url to reduce bot traffic (#762, #699) (Simon Michael, Daniel Yount, Michael Haubenwallner, Sascha Welter)
- handle unrecognised page types properly
- numericVotes method to get votes without voter identities
- merge Wikis.py into __init__ and Admin

Style and help cleanups, bugfixes.

Upgrade notes

If you use the , you'll need to update it to work with the new "expandable" css class.

Configuring
- reorganize stylesheet
- add missing mathaction styles
- rename the "aqtree3clickable" css class to "expandable"

Browsing
- subtopics, page context, wiki contents styling fixes
- make the help page a built-in view like recentchanges etc.
- help updates
- make rss feeds and creationTime()/lastEditTime() more robust, eg when the creation_time property is blank

General
- fix backwards compatibility for old page objects
- speed up and quieten down tests a little
- developer docs cleanup
- "authorstats" make rule

Welcome, Haskell! Plone 2.5 support, LatexWiki and MathAction incorporated, plone tabs cleanup, translation updates, bug fixes.
Installing - fix installation breakage in vanilla CMF (#1271, koegler, Simon Michael) - fix typo which broke ZMI add zwiki form (#1269) Editing - make standard_error_message create button use createform, not editform (#1275, Peter Merel) Miscellaneous view-related and general enhancements. Upgrade notes CMF/Plone users: the zwiki_standard and zwiki_plone skin layers have been replaced by a single zwiki layer. Re-install Zwiki in your CMF/Plone sites, using Plone’s add/remove products or CMF’s quickinstaller, to register the new skin layer. Also remove the zwiki_standard and zwiki_plone skin layers from your skins in portal_skins -> Properties. Installing - cleaned up ZMI Add ZWiki and Add ZWiki Page forms - remove _getViewFor import that broke with CMF 2.0 - fix upgradeAll’s batch option Browsing - /myvotes view shows your votes in this wiki - highlight your current vote for this page, if any Editing - support a max_identified_links property also, for cookie-identified users (for now) - edit history enhancements, a more useful diff browser and more powerful revert methods. Renames can now be reverted, reverting is more reliable, and appropriate mail notifications are sent. General - replace zwiki_standard and zwiki_plone with just skins/zwiki - Views code cleanup - simplify definition of view macros and make them refresh immediately in debug mode - more code docs - don’t bother identifying the catalog in event log RST+DTML support, alternate heading layout for plone, some useful bugfixes. Configuring - add an old-style compact parents list as an alternative in pageheader.pt, off by default (#1250) - CSS cleanups Browsing - fix action icons layout in plone 2.0 (#1253) - make top-right links small again in standard skin Editing - enable dtml interpretation in rst pagetype (Stefan Rank, Simon Michael) - fix unsubscribe, which was losing all subscribers’ “all edits” setting (#1254) - fix an obscure case where cmf/plone subscriptions could be ignored - don’t add NO_ADDRESS_FOR recipients, they break some MTAs (#1255) Issue tracking - make changeIssueProperties respect relevant permissions and edits_need_username, to protect from spambots (#1260) - be more robust about including full page name in issue creation mails (#1257) General - easier make test-* rules - fix WWML test, which wasn’t testing WWML at all - testsupport docs, use rst not stx in default fixture - don’t html-quote the site header, if configured (#1240) - when creating pages, use epoz based on the new page’s type, not the parent page’s (#1243) -) General - don’t let find objects in the catalog break due to outline (#1246) - make include() convenience method more robust with authentication not reflect the new edits_need_username property until you update or remove your template. If you do not have a catalog configured for your wiki, recent changes will no longer sort properly. You will probably want to add a catalog, by visiting SOMEPAGE/setupCatalog as manager. Purple numbers are no longer supported out of the box. Installing - fix a bad import which broke CMF 1.4/Plone 2.0 support (#1211) Configuring - new ‘edits ‘mail) Minor changes. - show subscriber counts in subscribeform** General - add names from ZwikiContributors to CONTRIBUTORS.txt A new copyright/license policy and contributor list; Plone 2.1 and mail fixes. Upgrade notes You should upgrade to this release if you’re using Plone 2.1. 
Browsing - make the green border & tabs appear in Plone 2.1, and indeed force it always on, for now (#1187) - make authenticated/unauthenticated subscriptions in CMF/Plone more interchangeable so things just work (#1199, Simon Michael, John Riley) - subscriberList was returning a stray :edits, causing trouble eg in lotus notes (#1197, John Riley, Simon Michael) - also accept Secure Mailhosts, fixing mailout in Plone 2.1 (#1197, Tracy Reed, Simon Michael) General - i18n - minor fix, one line hadn’t been translated (Frank Laurijssens) General - add new contributors agreement & repo policy French and dutch translation updates & several small bugfixes., ‘;’ instead of space) (Stefan Rank) - french translation updates (Encolpe Degoute) - i18n code cleanups (Encolpe Degoute) - Additional translations for Dutch (Frank Laurijssens) Allow limited-depth hierarchy display, translation updates, bugfixes. Upgrade notes If you have some #NNN issue links which are not working, the #1179 bugfix will take effect when you edit that page (or visit PAGE/clearCache). Per-user mailout policy, skins reorganization, plugin architecture enhancements, support for boring pages, wicked link syntax, favicons and separate create form, better logging, many code cleanups & bugfixes.. Installing - fix __implements__ attribute error when CMF is not installed (#1149, Warren) - make Zwiki startup more robust with strange locales/strange pythons (#1158) - make zmi add wiki form use a btreefolder - don’t break when adding a wiki with id home (#591) - drop unnecessary loading plugins message at startup - don’t log a warning at startup when the system locale is None - clarify missing PTS warning at startup - make fit import failure warning at startup less troubling - fix a case where a page upgrade wouldn’t get logged - don’t break when displaying a page whose page type has been uninstalled. - set a true use_double_parenthesis_links property to enable Wicked-style double parenthesis syntax for freeform wiki links. This is for world users who can’t easily type []. - rename and delete no longer check for a username in addition to the permissions (though the skin may still hide them without a username.) Browsing Editing - be sure to create our own recycle_bin even if the parent folder has one - purple numbers now disabled by default - allow create to be called directly by page management form - add “all edits” checkboxes to subscribeform. Zwiki can now send comments to some, edits to others. 
- reparent now also sends a mailout - remove (new), add (edit) in mailout subjects for clarity - just say “links updated” in rename mailout subject Issue tracking - use more consistent (property change) in issue mailouts, and don’t say it if no property changed - /issuebrowser was showing filter issues form General - skins - big skins reorganization to reduce duplication - use a separate createform skin template for creating pages if present - hide most of the wiki action links by default in standard skin - tweak page management form layout in standard skin - drop unnecessary extra text from page management form in cmf/plone - remove old _ and = access keys from /showAccessKeys - register just the zwiki_standard skin with FilesystemSiteDirectory when installed - italicise log notes in recent changes - if a ‘favicon’ object is present, set it as the page icon in the standard skin - don’t let recentchanges break when a page type has been uninstalled - use the word “page” instead of “subtopic” in the page management form - don’t let non-template objects with the same id hide our skin templates; improve bad template error - remove the h2 bottom border style - remove stray commas in page templates causing log warnings - drop uploaded and wikinav_portlet templates from plone skin General - split content/ into wikis/ and scripts/ - strip html tags in the page summary, and add renderedSummary which does formatting and linking - rename all pagetypes’ renderXIn methods to format - plugins can now be packages or files - plugins can now override any method in the product core - add support for emacs-style “hooks” for customization by plugins Robustness fixes, Plone 2.1 compatibility fixes, simplifications, CMF metadata support. Upgrade notes You might want to delete the “outline” object using the ZMI, allowing Zwiki to create a more ZMI-compliant one. Note this will lose any manual re-ordering of subtopics you may have done. The default colour for “serious” issues is lighter. This will take effect for each issue page as it is edited. To make all pages show the correct colour right away, visit SomePage/upgradeAll in your browser. 
Installing
- make the outline cache replaceable, to avoid errors during migration of plone 2.1rc<3 sites (#1143, SM, alecm)
- make catalog lookup more robust when there is another object named Catalog (#1132, SM, Tim Olsen)
- catch errors when importing page types and plugins at startup (#809, #1148)
- make the fit import error at startup less verbose (#1054)

Configuring
- make the outline cache object fully ZMI-manageable (#1144)
- drop support for alternate catalog names via the SITE_CATALOG property
- drop support for overriding the contents view with a SiteMap page
- don't update page creation times when moving/renaming/importing in the ZMI

Editing
- fix page creation/editing breakage in plone 2.1 due to explicit acquisition (#1137, SM, alecm)
- enable CMF/Plone document metadata support (jbb)

Issue tracking
- make the serious issue default colour lighter
- allow issues without numbers (with at least a "status" property)

General - i18n
- remove unnecessary -en po files
- add missing headers for the german po
- drop unnecessary -PT for pt po files
- fix space-separated i18n attributes to avoid deprecation warnings on startup

General - skins
- make hasSkinTemplate check filesystem templates, such as the alternate subtopics template (#1113, Mark Ferrell, SM)
- drop the special style for ratings, for now

General
- allow create to work without a request, for debugging
- merge moin_support.py with moin.py for easier handling
- don't bother noting pre-renders in the transaction log (#948)
- drop the unused getSkinTemplateWithDefault method

Drop full/simple/minimal and simplify the standard skin, more robust wiki contents.
If you have any, please replace any macro-related occurrences of the following strings in your templates:wikipage_macros -> wikipage_template quickaccesskeys -> accesskeys quicklinks -> wikilinks editlinks -> pagelinks pagename -> pagenameonly The following CSS class has been renamed, you’ll need to update any customised zwiki stylesheets accordingly: - quicklinks -> wikilinks Also a few macros and classes that no-one was using have been removed. I expect to remove the backwards compatibility support for old macros in the next release or three. The standard skin directory has been renamed to zwiki_standard. In CMF/Plone sites, you should uninstall and re-install Zwiki in plone setup (the upgrade link may not work) so that zwiki_standard appears in the portal_skins tool. You can then delete portal_skins/standard and portal_skins/zmi in the ZMI. Editing - be more robust when saving a page with undated comments (#1103) - after voting, redirect to the referer instead of the rated page Page hierarchy - make subtopics template-driven, selectable via subtopics_style property, and dtml-aware. Two styles are now included: “outline” and “board”. The latter shows view counts if mxmCounter is installed. See admin guide. - also recognise a MaildropHost as mailhost - allow mail properties to be configured on a per-page basis Issue tracking - add a full comment form to the issue properties form General - i18n - russian translation updates for the standard skin (Michael Krishtopa) - german translation updates (Jens Nachtigall) - new portuguese translation (João Villa-Lobos) - update pot and po files - misc i18n fixes General - skins - merge wikipage_macros into wikipage template - macro & stylesheet cleanups; use macro for form headings - inline macros in zwiki_plone’s wikipage template, remove wikipage_header and wikipage_footer - rename issuepropertiesformdtml.dtml to issuepropertiesform.dtml - rename noindex slot to searchtags - rename quickaccesskeys macro to accesskeys - rename quicklinks macro and CSS class to wikilinks - rename editlinks macro to pagelinks - rename pagename macro to pagenameonly - rename pagemanagement macro to pagemanagementform - rename standard/ to zwiki_standard/ to be more plone-friendly - move skins/zmi/* to skins/zwiki_standard/ to be more plone-friendly - change to upper case for View _h_istory (Jens Nachtigall, #1081) - add “Revert to this” button in plone diff form as in standard skin (Jens Nachtigall) - clarify revert button wording in both skins - don’t call pageUrl so many times in wikipage Bob was able to measure some speedup from doing this. I’ve redone his patch for the cleaned up template. However I believe it’s necessary to define pageurl again in each macro where it’s used, so it’s in scope for other templates using the macro, so it’s still called a few times per page view. (Bob McElrath, Simon Michael) - hide some urls and things when pages are printed, (anonymous, #1110) - add a pagesByType pythonscript to content/misc General - fix and update broken/failing tests, make all tests pass again (#1104, #1049) - fix a TypeError and AttributeError when CMFMember is used (Jens Nachtigall) - another robustness fix for isEmailAddress and emailAddressFrom - make pageType and pageTypeId public, for troubleshooting - remove no longer used (and broken) wikiOutlineFromParents - remove _createFileOrImage’s unused parent argument - tweak requestHasSomeId, add userIsIdentified alias Some anti-spam features, skin fixes and translation updates. 
Upgrade notes Translations may be somewhat in flux this release. Configuring - restrict external links: if a max_anonymous_links int property is defined, edits from unidentified users (with no username) containing more than that number of http:// links will be refused and logged - delayed indexing: pages now include the NOINDEX meta tag for 24 hours after an edit, to reduce the chance of spam links being indexed by search engines. Browsing - show page source links when using thorough search, to help with spam cleanup - - make subscribe form help reflect the comments/edits mailout policy - (Jens Nachtigall) - AnonymousUserSubscriptionInPlone (Jens Nachtigall) fixes #878 anonymous subscription in cmf/plone subscribes “Anonymous User” - br_for_subscribed_pages_list_on_subscribeform (Jens Nachtigall) If one is subscribed to several pages, theses pages are on the same line, which looks confusing. Now it is done as with the page_subscribers list on the very same subscribeform, ie an “<br />” is inserted between each page name. General - i18n - regenerate pot and po files, and this time with up-to-date Default comments (#1093, SM, Jens Nachtigall) - mark_PREVIEW_as_translatable (Jens Nachtigall) - chinese translation updates (T.C. Chou) - spanish translation updates (Gaspar Quiles) General - skins - subscribeform_remove-unneccessary-span-tags (Jens Nachtigall) - new api method: lastEditIntervalInHours Bugfixes, new japanese translation, chinese translation updates, new search option. Upgrade notes This fixes a high-profile known issue in 0.39, #1062. If you didn’t already find the issue page and workaround, upgrade to this version to fix it. Browsing - add a “thorough” non-catalog search to search form, commented out (for spam hunting) Editing - fix the keys attribute error when creating pages in plone (#1062) - deleting a top-level page redirects to the front page again; new Search improvements, subtopics ordering, style tweaks, bugfixes. Upgrade notes To remove excess search keywords from existing zwiki-containing plone sites, reindex and replace the Subject index and metadata in portal_catalog in the ZMI. For best search functionality, you may need to /setupCatalog and/or upgrade or reconfigure your text indexes. See for more info. If you have customized the standard skin templates or stylesheet, note a number of CSS ids are now classes. Browsing - SearchPage/searchpage form cleanups, robustness fixes, better results, google search (#1036) - don’t create so many search keywords in plone (#1059) Editing - set RST error reporting level to 2 by default (#992, Stefan Rank) Page hierarchy - support subtopics reordering with a backlinks UI and new reorder method (#1044) - preserve page hierarchy through folder renames; the ‘outline’ information object is now visible in the ZMI and permanent (#728) - don’t redirect after /updateWikiOutline, because it confuses first page creation in plone (#1028, huron) General - skins - don’t add left & right whitespace margins in default skin any more - avoid ids for css-usage where possible (Stefan Rank) - little typo in css eliminated formfield(s) background color setting (Stefan Rank) - underline h2s, gnome-style, shrink them a bit, and add slight whitespace above - don’t reduce default font size in stylesheet - tweak access key help). 
Configuring - fix dependency on /setupTracker for good large-wiki performance; /setupCatalog is now sufficient (#993) - be compatible with plone’s default_page lines property (#914) ‘contentaction’ items) to be in zwiki-plone-* as well (huron) General - catalog awareness cleanups - make setskin links harmless for anonymous users and bots, instead of logging errors (#1010) Improved rating form, a file -> wiki page import tool, chinese & hungarian translation updates, bug fixes. Page hierarchy - make updateWikiOutline redirect to the updated contents view, and provide an easier alias: updatecontents - don’t log a traceback when deleting a page with out-of-date outline cache Editing - avoid creating both pages when you rename during page creation drop the leaveplaceholder and updatebacklinks arguments from create(). Changing the page name in the create form now always updatesbacklinks, never leaves a placeholder. Rating - new compact rating form at top right. Unrated pages now have rating 1 by default. - rating could fail when there was only one vote - make rating’s average a little more accurate - workaround for clumsy mailhost lookup (#983) - mailin: don’t log incoming message text or traceback for missing text part - make mailout logging less verbose - fix a bug preventing subscribe link from appearing when mailhost’s id is not ‘MailHost’ Issue tracking - don’t match xml/html character entities as hash number issue links (Stefan Rank) General - i18n - zh-TW and zh-CN translation updates (T.C. Chou) - add missing metadata to finnish po files - hungarian translation updates (Jaroli József) - po file typo (T.C. Chou) General - skins - close a cross-site scripting vulnerability in standard error message (SSA-20041122-12, #925, ChrisW) - Hide web only elements when printing from plone (Michael Twomey) When printing from plone certain elements are visible on the printed page, especially the access links. With the default plone print style (plonePrint.css) all links get expanded to full URLs which results in a lot of useless junk at the top of the screen. This patch adds a few plone related CSS classes to certain sections of the page which in turn ensures they aren’t visible when printed. This has no effect on browsers which aren’t aware of the print style sheet. The access keys still work fine in FireFox 1.0. - fix html title in plone sites - rearrange/reword wiki actions in cmf/plone, drop filter issues action - tweak page management help in cmf/plone - display site_header in standard skin header, if present - use canonicaIdFrom to help link user names in recent changes - change RecentChanges ZCatalog query-style to avoid deprecation warning (Stefan Rank) General - add tools folder and a basic import-files-to-wiki utility Installing/upgrading - another fix for moin regexp encoding at startup (#963,#971,#972) Installing/upgrading - fix zwiki startup on systems with locale and non-utf-8 encoding (#963) Skin switching in CMF/Plone, better MoinMoin markup support, edit preview, easier issue page names and links, russian & hebrew translations, bugfixes. Upgrade notes Skin switching hotkeys have changed slightly Issue pages have a new #N naming scheme. Your old IssueNo pages will still work; upgradeAll will rename them all. 
We now add an optional Zwiki skin in CMF/Plone sites at install time; if you want to enable skin switching within CMF/Plone, re-install Zwiki in plone setup -> add/remove products Installing/upgrading - fix startup problem due to path separator on windows (Stefan Rank) - upgradeAll: log elapsed time (Bob McElrath) - upgradeAll: upgrade issue pages to new short names by default - upgradeAll: replace partial_commit flag with a numeric batch argument - upgradeAll: reindex every page, as well; slower but more thorough - upgradeAll: log total number of pages - simplify upgradeAll & upgrade, do less committing Creating/configuring - show subtopics is on by default; use a false show_subtopics boolean property to disable - make add zwiki web use a BTreeFolder2 when possible (untested) - don’t send mail when setupTracker creates dtml pages Browsing - adjust skin switching keys, allow switching between (uncustomized) plone and standard skin in CMF/Plone sites (re-install zwiki and try alt -) - fix the diff link in “last edited by” - user options: remove unnecessary options; don’t show logo/heading, consistent with other forms Editing - don’t require a username to show page management form - page management form permission-sensitivity and text improvements - plone editform: call page format “page type” like everywhere else, list WWML page type along with the rest, add a cancel button as in standard skin - don’t display the page name twice when you cancel an edit - improved moin markup support: replace the old moin markup code with actual MoinMoin code (bounty from Canonical) - add preview button in editform (#933) (bounty from Canonical) - fix a case where renaming could fail due to out of date wiki outline Page hierarchy - more skin cleanups; allow reparenting via backlinks in plone and in minimal mode - reparenting fixes: prevent the first page appearing as an extra parent, handle old tuple parents properties better (#952) - mailin: once again post messages without a page name to a default page, unless default_mailin_page property is blank - be smarter about finding a MailHost; a page of that id will no longer break mail-out Issue tracking - use short #N names for new issue pages; old IssueNo pages still supported but deprecated; upgradeAll will rename them - issuetracker: layout tweaks, also list severity counts - filterissues: show category-severity matrix, instead of category-status; I think this may be a more useful default - filterissues: alter issue counts in totals matrix based on severity selection - filterissues: fix this old chestnut again.. a FilterIssues page’s action should point to itself, not to the template Fit tests - fit: display a nice warning at the top of the page when there are fit tables and we can’t import fit General - i18n - update italian translation (Lele Gaifax) - new russian translation (Denis Mishunoff) - new hebrew translation (Ofer Weisglass) - hack i18n for unit tests, possible slowdown ? General - new method: wikiPath() - rename protectEmailAddresses Spaced wikiname display, a basic MoinMoin page type, better search engine indexability, i18n updates, bugfixes. Upgrade notes Nothing special to do. 
General - partial MoinMoin markup support (Chad Miller/Simon Michael) - display wikinames with spaces when there is a true ‘space_wikinames’ folder property - Don’t change zope’s current working directory (IssueNo0919, Kai Hoppert/Bob McElrath) - allow page types or a ‘zwiki_content_type’ property to set the HTTP content-type header (Bob McElrath/Simon Michael) - give a warning message rather than a python error for edits containing banned links (Bob McElrath/Simon Michael) - omit error-generating Structured Text tables instead of showing a traceback (IssueNo0692) - increase default summary size to 200 letters - new methods: spacedPageName, Subject, Description Skins and content - provide keywords and description meta headers for better search engine listings - recent changes: make period buttons lower case - plone: fix double doctype header (Bob McElrath) - plone: add External Editor action for Wiki Page type (Bob McElrath) - don’t show “no subscribers” by the add comment button - make sure page- & skin-based tracker forms are all up to date again Mail and comments - decode quoted printable mailins (get rid of =20’s) (Bob McElrath) I18n - removed the workaround to fix utf-8 encoding with PTS, as I seem to no longer need it; if your utf-8 encoding breaks, please see I18nSupport.py and report - i18n updates (Nicolas Laurent) - french translation updates (Nicolas Laurent) - german translation updates (Andreas Mayer) - new partial finnish translation (Jyrki Kuoppala) - internationalise “new” in mailout subjects - fix incorrect value;title syntax in page management form i18n (Bob McElrath) Beginnings of a plugin architecture; mailin simplifications and enhancements; new hungarian translation; miscellaneous skin enhancements and bugfixes. Upgrade notes Legacy support for the posting_policy property has been dropped, use mailin_policy instead. The mailin method no longer supports arguments other than the message itself; control it with folder properties and choice of mail alias instead. General - add firstPageUrl & lastPageUrl methods - include() dtml utility now supports show_subtopics keyword argument - fix bug preventing use of show_subtopics keyword in urls - add URL to the list of ids to avoid.. it breaks standard_error_message at least - code refactoring to support a more modular architecture; PurpleNumbers, Fit, Rating, Regulations and Tracker support are now (somewhat) in plugins subdirectory - link uploaded files properly on RST pages (Marius Gedminas, IssueNo0814) - do away with the unnecessary ALLOWED_PAGE_TYPES list as Bob suggested - Allow empty RemoteWikiURL name (Bill Page) It is sometimes useful to allow the remoteURLname to be empty. That is we give a WikiName that looks like it is remote but really it is not (at least not yet ... :). The old code caused a : to be inserted. 
- comment out unused code breaking mailout (IssueNo0890) - also check comment subjects and log notes for urls from the banned_links list - likely fix for false contents in new wikis (IssueNo0903) Skins and content - make hidden access-key links conditional like the visible ones avoids errors in the logs due to spiders viewing non-existent tracker, eg - let next/prev access keys wrap around when on last/first page - show “last edited by” in minimal mode - make the discussion link look for UserDiscussion before GeneralDiscussion - plone diffform: use n & p access keys for next/previous - diff form revert button (Bob McElrath) Updated revert patch: 2) Add “Changes in revision N” to header 3) Add “Log note:” 4) Add “Return to page” button. 5) Grey out next/prev buttons on first/last diff. - html tweaks to support expandable bullet list patch - fix double skin with wikipage() method in cmf/plone (IssueNo0888) Mail and comments - mailin: more code cleanup, add experimental spam url reporting. Sending mail to a mailin address of the form spam@... will add all urls found in the message to the banned_links folder property, if there is one. - mailin: drop support for configuration via arguments - mailin: support default_mailin_page folder property - mailin: update doc, drop support for old posting_policy property - mailin: don’t require an initial word boundary when matching mailin aliases - mailin: allow partial bracketed page names in subjects - don’t send out mail-outs which have an empty body Tracker - make createNextIssue choose correct issue number in authenticated wikis - fix a python 2.3ism causing issue tracker to fail with zope 2.6 (IssueNo0897, koegler) I18n - add utf-8 content-type header to mail-outs - new hungarian translation (József Jároli) - german translation updates (Andreas Mayer) - dutch translation updates (Jaap Noordzij) - brazilian translation updates (Jean Rodrigo Ferri) Minor fixes, mail code cleanups, translation updates. Upgrade notes No special upgrade issues. General - allow page types to change the HTTP content-type header (Bob McElrath) - make _ work during unit tests again Skins and content - fix unnecessary python 2.3-ism in editform - zope 2.6 compatibility fix for full/simple/minimal getattr error (IssueNo0875) Mail and comments - refactor mailin, sendMailTo and other mail code - mailin policy change: messages with no page name in subject are now discarded by default - fixes for mailin of new page/new issue (IssueNo0879) - use “anonymous” as real name for anonymous mailouts Tracker - createNextIssue updates: return the page name, support sendmail=0 argument - support pages(isIssue=1) when there is no catalog I18n - updated the italian message catalog for 0.32.0 (Lele Gaifax) - polish translation updates (Jakub Wisniowski) - chinese transation updates (T.C.Chou) Zwiki’s skin is about fully internationalised; translation updates, new polish, dutch & german translations; skin cleanups; pluggable page types; refactoring, bugfixes. Upgrade notes zwiki_plone/wikipage_view template has been renamed to wikipage If you use a character encoding other than utf-8: zwiki has been hard-coding utf-8 since 0.31 - see IssueNo0855. 
General
- page types refactoring: moved to separate modules in pagetypes/, new short ids, pluggable API docs
- simplify CMF/Plone rendering and page view url
- fix "some permissions had errors" message (IssueNo0842)
- don't let any failure to update links stop a rename
- don't let an empty allowed_page_types property break page creation

Skins and content
- allow python scripts in wiki templates (Bob McElrath)
- tighten up page management form layout and add a subtopic creation button
- new include() convenience method for dtml ("dtml-var include(SomePage)")
- diffform: full history button was not checking the correct permission (Bob McElrath)
- rename shade2 class to formfield, use consistent background shading in form elements
- standard: make form inputs & selects inherit font size
- standard: remove subtopics option from editform, use ZMI instead
- zwiki_plone: rename wikipage_view to wikipage to be at least self-consistent
- zwiki_plone: remove unused wiki_icon.gif
- xhtml fixes (Bob McElrath, Simon Michael)

Mail and comments
- make citation formatting in comments more robust (IssueNo0863)
- allow a per-page mailout_policy (Bob McElrath)
- fix mailout of edit with empty log note via PUT (Bob McElrath)
- subscribeform: remove duplicate "you are not subscribed" message
- do not add extra space on mailin Subject lines (Bob McElrath)

Tracker
- don't treat all pages beginning with a digit as issues by default
- make issue tracker layout cleaner and more consistent in standard/plone
- issue properties form: allow a longer optional note, up to 100 chars
- disable subtopics display on the IssueTracker page by default when doing /setupTracker?pages=1

I18n
- new german translation (Andreas Mayer)
- chinese translation updates (T.C.Chou)
- italian translation updates (Lele Gaifax)
- french translation updates (Nicolas Laurent)
- polish translation updates (Jakub Wisniowski)
- new polish translation (Marek Ciesielski)
- new dutch translation (Jaap Noordzij)
- more i18n and fixes in python, page templates and dtml (Jakub Wisniowski, Nicolas Laurent, Simon Michael)
- set utf-8 content-type header in dtml also, to fix character encoding
- fix occasional i18n errors by making _() return a string (IssueNo0856)
- split po files into zwiki and plone domains to fix CMF action translations; translate at run time, not install time (IssueNo0833)

Much i18n progress including a number of new translations; tracker cleanups and usability improvements; permissions fixes; tests, code cleanups, bugfixes.

Upgrade notes

Similar to 0.30. changeIssueProperties() "title" argument is now "name".
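A hedged sketch of the renamed argument, e.g. from trusted code; the page name, the new value, and the status keyword are assumptions, not taken from these notes:

    # hedged sketch: 'name' replaces the old 'title' argument
    issue = context.pageWithName('IssueNo0001 first issue')  # hypothetical page
    issue.changeIssueProperties(name='first issue, reworded',
                                status='closed')  # keyword assumed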
General
- always update links by default when renaming (page management form was not)
- don't show a .svn subdirectory in the add wiki form (Lele Gaifax)
- make image and file links sensitive to page type (IssueNo0737, peterq)
- fix some missing security declarations in 0.30 (IssueNo0796)
- make page rating not require python 2.3 (IssueNo0794)
- better comment formatting on reST pages (Lele Gaifax)
- support reST in Zope 2.7.1 (Lele Gaifax)
- cmf/plone: forcing a sole page's name field to FrontPage was unhelpful, remove
- link banning: edits containing urls in the banned_links property will fail (for spam prevention)
- make summary() smarter, include only the first paragraph by default
- create() can now also set parents

I18n
- plone editform: don't show duplicate page types due to i18n problem (IssueNo0823)
- standard and plone skin i18n (foenyx, simon)
- french translation updates (foenyx)
- italian translation (Lele Gaifax)
- portuguese-brazilian translation (Jean Rodrigo Ferri)
- traditional and simplified chinese translations (T.C. Chou)
- always send the HTTP header to help character encoding negotiation
- renamed LocalizerSupport to I18nSupport; we now try to use PlacelessTranslationService first
- PTS-based i18n support for python code and DTML
- fully automate pot extraction

Skins and content
- show current page in contents by default again. This is much more convenient for a human user; the drawback is more hits to contents (one for each page) from robots which treat #ref as a separate url.
- defaultPage() and the wiki navigation links were ignoring a default_page folder property
- standard wikipage.pt: add a left/right layout table around ratingform
- make plone action urls point to customized pages if present (IssueNo0786)
- use grey background for pre and code sections by default
- include zwiki.org percentage styles
- make in-reply-to indicator more discreet
- explicitly set foreground and background color, for people with dark backgrounds
- simplify ratingform template
- rename zmi forms and move to skins/zmi/

Mail and comments
- make comment subjects appear in recent changes again (IssueNo0821)
- don't discard mail to TestPage page subscribers (IssueNo0672)
- make emailAddressFrom() smarter; handle non-cmf-members who have an email address pref; highlight problem users in the recipient list instead of quietly dropping them

Tracker
- usability & page layout improvements; highlight current sort & filter options
- always parent new issues either under IssueTracker or at the top level (not all over the wiki)
- protect issue properties with zwiki edit permission (IssueNo0278)
- don't require rename permission to change other issue properties
- change issue "description" to "name"
- new issuepropertiesform skin method & template
- simplify and clean up issue tracker & filter issues DTML
- tracker code refactoring; move tracker setup code to Tracker.py; new methods issueNumber(), issueName()
- support for issue page names beginning with a number or #number (commented out)
- removed support for custom 'isIssue' python script
- add missing tracker unit tests file

Page rating, fix epoz support, bugfixes, code cleanups, i18n work, a french translation.

Upgrade notes

You should probably remove the 'links' field from your catalog's indexes and metadata now, as it isn't needed and leads to slowdowns in Plone+Epoz sites (IssueNo0784). If you want to do zopewiki-style queries on page ratings, you'll need to add the new rating and voteCount fields to your catalog.
Running PAGE/setupCatalog?reindex= will take care of this. If you have Localizer installed, you may need to apply a workaround.

General
- page rating support: new methods rating, voteCount, and ratingform are supported and enabled in the standard skin by the 'Zwiki: Rate pages' permission. rating and voteCount are cataloged by default.
- our 'links' catalog field caused slowdowns with plone+epoz; try life without it (IssueNo0784)
- obey LEAVE_PLACEHOLDER default when renaming via edit form (IssueNo0579)
- renaming to a page name beginning or ending with spaces was not reparenting the children
- make parent and firstParent more robust (IssueNo0788)
- "completed" log messages for long admin operations (eg cataloging)
- make Url methods fall back to '' instead of None
- drop show_navlinks property
- allow partial page names when reparenting
- do citation formatting only for >'s at column 0. This leaves ::-quoted python examples alone, and seems to still catch normal comment citations; it doesn't prevent all unintended citation rendering though. (IssueNo0770)
- pages named REQUEST, RESPONSE, or Epoz caused problems for zope, possibly breaking the whole wiki (ZMI too). These are now handled safely by adding X to their id.
- fix page creation with zope 2.6/python 2.1 (IssueNo0777)
- code refactoring, new tests

Skins and content
- fix plone skin breakage due to multiline TAL (IssueNo0792)
- epoz fixes: make editform work with epoz 0.8, simplify installation, don't show parent page content when creating, remove duplicate source-mode checkbox (IssueNo0773)
- standard: remove last interfering leave placeholder option (IssueNo0764)
- french translations (Nicolas Laurent)
- update po files
- don't show subscriber count when mailout is disabled, and don't italicise it
- use small font in the dimtext style
- contents page heading tweak
- remove unused navpanel2 macro
- rename 'default' skin to 'standard'
- xhtml fixes (Alvaro Cantero, Bob McElrath)
- standard: some i18n attributes, html formatting
- standard: add badtemplate error template

Mail and comments
- fix mailin destination with multiline subjects (IssueNo0547)
- drop fewer_headers feature
- comment mailouts now appear the same with either 'comments' or 'edits' mailout policy
- rename MessagesSupport & Messages.py to Comments*; refactor mailin & comment code, use new email lib, new tests

Bugfixes for Zope 2.7 & Plone 2, skin tweaks.

NOTE: immediately following this release I am switching to darcs as primary revision control system and considering CVS to be frozen. Please see zwiki.org for new procedures and discussion.

Upgrade notes

Nothing special; check past upgrade notes if appropriate.

General
- wiki templates (added via the ZMI Add ZWiki form) can now import zexp, xml, jpg, png, gif and pt files, and run a setup_WIKITYPE external method if present. This allows, eg, a latex-enabled wiki template to be distributed with the LatexWiki patch.
  (Bob McElrath)
- control page creation with add, not edit permission
- don't let python locale problems prevent product initialization (IssueNo0392, IssueNo0769)
- fix missing Reparent permission (IssueNo0748, IssueNo0757)
- wikiOutlineFromParents could break with zope 2.7, fixed (IssueNo0738)
- be more robust with junk in the catalog when adding/viewing a page (IssueNo0722)
- fix "must be lazy sequences" error in btree-based folders (IssueNo0713)
- include .cvsignore files in CVS

Skins and content
- xhtml fixes (bob mcelrath, cyrille lebeaupin)
- default: show next/up/prev links in full mode by default
- default: skin template code cleanups
- default: make sure small fonts are smaller than body
- default: use "options" instead of preferences, access key "o"
- default: hide explicit backlinks & diff links again - not needed
- default: tidier & customizable prev/up/next links
- default: show parent context in simple mode too
- default: try the linear parent display in full mode
- default: default to a smaller font
- default: tweak margins
- access keys "," "." "/" for full/simple/minimal
- access key "m" for subscribeform
- show wiki contents, not front page, when going up from a top-level page
- now using PTS-compatible message catalogs in i18n/
- tell Epoz to look in epoz/ instead of ./ for its files, for easier setup

Mail and messages
- a citations whitespace fix (bob mcelrath)
- obfuscate email addresses in messages slightly (bob mcelrath)
- disable mailout formatting (IssueNo0696, bob mcelrath)
- fix subscribing to the whole wiki (IssueNo0754)
- tweaks to support ezmlm integration

Tracker
- make setupTracker do robust re-indexing like setupCatalog
- don't let metadataFor and issueColour mistake zodb objects for page properties (IssueNo0732)

Bugfixes, tests, refactoring.

Upgrade notes

Nothing special. If your tracker doesn't sort, see below.

General
- make wiki outline handling more robust, preventing IssueNo0727 and similar (when it exists but is None for some reason, regenerate)
- new pages were appearing twice (title and id) in the wiki contents
- diff no longer triggers python crashes on freebsd
- make indexing more robust and more verbose during setupCatalog
- make history transaction notes a little less verbose ("findlinks")
- don't log "reparenting children" during delete when there aren't any
- remove old regulations permission

Skins and content
- fix traceback when creating/editing pages with recent plone/zope versions (cmf/plone 'allowed_types' global overriding our 'allowed_types' define) (IssueNo0726, Christian Heimes)
- don't catch and display skin template errors any longer; zope's own traceback is much better
- default, plone subscribeform: clean up the subscribeform a bit and make it consistent in both skins. Display a message when there is no email address. We now try to hide the email field and button when we can use the CMF member email property instead. The logic here is a bit rough and may not work in all situations.
- do not produce an empty stylesheet, which confuses Firefox EditCSS (Bob McElrath)
- use a single contents URL (based on the default page) to reduce robot activity
- make subtopics & comments headings bold, remove overline
- moved subscribe back to the page tabs area to create better visual separation of "page" and "wiki" operations
- don't show the subscribe tab in plone if mailout is not enabled
- basic: default page cleanups
- include print.dtml, toc.py, view_source.dtml scripts in misc

Mail and messages
- performance fix: usernamesFrom was too expensive, occasionally causing plone.org to max out cpu for 10 minutes at a time with its 7000 users. So now we don't try so hard to match up email addresses with CMF member ids - subscribers should use one or the other. This should fix it. (Alan & plone team, Simon)
- correctly render pages containing only messages (Bob McElrath)

Tracker
- a tracker created with zope 2.7 would not sort issues (IssueNo0720). Note if you have already created a tracker with 2.7, to fix sorting you'll need to visit PAGEURL/index_object on each issue page.

Fixes a couple of problems with 0.27.

General
- fix last minute bug causing updateWikiOutline to fail with a pageName attribute error (IssueNo0694)
- don't break if plone 2's document_actions is not present (IssueNo0701)
- improve robot robustness for large public wikis by 1. removing potentially expensive clearCache and print links from the skins, and 2. making scroll-to-current-page in contents (with its consequent many unique expensive urls) optional and off by default (IssueNo0699)

New, faster page hierarchy implementation; access keys everywhere; page management form and action icons in plone; utf-8; bugfixes.

Upgrade notes

Two upgrade issues this month:

If you have non-ascii characters in your page names or content: the default skin used to specify iso-8859-1 character encoding; it now specifies utf-8. So your wiki's character encoding may change, and non-ascii characters may no longer display correctly. You'll have to convert these manually.

If you were running the 0.27cvs code: you might be unable to view or add pages after upgrading to 0.27 final. To fix, visit SOMEPAGE/updateWikiOutline, or /upgradeAll. (If this happens to you and you weren't running 0.27cvs, I'd like to hear from you.)

General
- page hierarchy is now cached in a wiki Outline object, stored as a (hidden) folder attribute, instead of being generated from the page parents attribute every time. All hierarchy operations should now be fast (so it's safe to enable show_subtopics and show_navlinks in large wikis) and hopefully scalability (eg conflict errors) is no worse.
- fixed a reStructuredText import bug with zope 2.7 (IssueNo0687)
- under rare conditions viewing a page could give a 'pagenameandX' attribute error
- new next/previousPageUrl methods
- upgradeAll now checks each page's parents information and repairs it if necessary; it also regenerates the saved wiki outline. This restores any pages missing from the contents view. You can also visit /checkParents to fix a single page.
- revert a mistaken "fix" to manage_afterAdd/afterClone/beforeDelete (IssueNo0684)

Skins and content
- the default skin now specifies the utf-8 charset, like zwiki_plone, and the wikiname regular expressions can now handle multi-byte utf-8 encoding. This may cause some page names or wiki content to be displayed incorrectly after upgrading, but we don't attempt any automatic migration right now.
  (Simon, traldar, Samotnik)
- added a number of access keys, following w3c guidelines, and made them work in all views, both default and plone skin. Alt-0 displays a list.
- don't hide subtopics in minimal display mode any more; revert to a simple "subtopics" heading
- plone: make document actions (external edit, send to, print) appear on wiki pages (Alexander Limi)
- plone: clean up the comment form a bit, and show the page management form when the user has permission
- default, plone: make diffs slightly more bookmarkable
- default, plone: the useroptions template was not redirecting quite right, so options didn't stick (anonymous)

Mail and messages
- when using the "edits" policy, comments weren't sending mailouts (Joachim Bauch)

Tracker

Epoz support, skin layout tweaks, bugfixes.

Upgrade notes

No special upgrade issues.

General
- wiki links in HTML pages work again (IssueNo0664)
- simplify and fix locale awareness of bare WikiNames - these should now recognize upper- and lower-case characters defined by the system locale, if configured
- possible fix for startup problem with recent zope 2.7 beta (DateTime Syntax Error) (IssueNo0649)
- add python 2.3 encoding declaration in LocalizerSupport.py to prevent a startup warning

Skins and content
- default, zwiki_plone: the editform now uses Epoz 0.x, if installed, to edit HTML-only pages in WYSIWYG fashion. To see this, install the Epoz 0.x product; edit one of your pages and change its type to HTML; edit again to see the new form. See zwiki.org for more.
- default: streamline editform and other forms' layout (remove editform tabindex attributes, they only add confusion; make page type more prominent at top right; add cancel button; remove unnecessary large headings)
- default: include backlinks & diff links in the links panel. It now includes pretty much all site and page views.
- default: rearrange access keys: use r for recent changes, c for contents, u for (un)subscribe, b for backlinks
- default: fix minor typos and errors
- zwiki_plone: add back the by-line in wikipage_view (limi)
- zwiki_plone editform: update to the new Plone 2 forms format; fix tabindex so it works again; rewrote the help texts (limi)

Mail and messages
- some mailing list headers in mailouts were empty when there was no mail_from property. Now they use mail_replyto in that case. (IssueNo0673)

Tracker
- disable old broken issue navigation links

Fully-functional non-DTML wikis are now supported and default; refactored default skin with macros, CSS and SkinnedFolder support; more zwiki features available in CMF/Plone; quicker comments, improved search, easy tracker setup, much code refactoring, etc.

Upgrade notes

Many changes; you may want to hold off and watch KnownIssues for a week or two. The page_type property is no longer a simple string. Customized skin templates should continue to work. Downgrading from this release may cause temporary problems.

If you have DTML-using wikis, you will need to take steps to re-enable DTML pages after this upgrade. You can either:

1. re-enable DTML: set the allow_dtml property as described below; or
2. become a non-DTML wiki: remove or rename your RecentChanges, SearchPage, UserOptions, IssueTracker or FilterIssues pages. If you're using the default skin, the site navigation links will adjust automatically; if you're using the zwiki_plone skin you'll need to make such links yourself.
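For option 1, a minimal sketch of setting the property from trusted code (the allow_dtml name is from the note above; the boolean property type is an assumption):

    # hedged sketch: re-enable DTML for one wiki after this upgrade,
    # assuming 'folder' is the wiki's folder object
    folder.manage_addProperty('allow_dtml', 1, 'boolean')
    # note: a 'no_dtml' property anywhere above the folder still wins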
This release includes additional CMF/Plone action links; re-install Zwiki in your CMF/Plone site to see these (or add them manually in portal_types -> Wiki Page -> actions).

General
- no DTML by default: DTML is off again by default, for real this time. (These days it's too great a surprise for people who just want a wiki.) To enable DTML in pages you must now set a true 'allow_dtml' property on your wiki folder or above. (A 'no_dtml' property, any value, will override this and disable all DTML as before.) See also "new skin-based views" below.
- new page type objects: pages now store their type as a PageType object, rather than a string. These encapsulate behaviours that are specific to each page type. The page_type attribute is no longer accessible via the ZMI, and it is no longer a simple string, though in simple DTML usage it will return the same value as before (DTML and skin authors take note). Legacy skin templates should continue to work.
- search has been enhanced; it now lists matching page names and prints text excerpts, with or without a catalog (Dean Goodmanson, Simon Michael)
- comments no longer re-render the whole page, so are much quicker on large pages
- when enabled (via the show_subtopics property), subtopics are now always displayed, except in minimal display mode. This makes hierarchy more useful, but will also make large wikis slower and use more cpu time when bots crawl the wiki.
- by default, don't leave placeholder pages when renaming any more; configurable in Defaults.py
- permissions fix: since splitting out the UI class, methods such as editform were not respecting permissions
- we no longer need to set up an allowed_page_types property to disable WWML in plone
- unconfirmed fix for the DateTime error with zope 2.7/python 2.x (IssueNo0649)
- more robust reporting of structured text formatting errors
- fix typo in updateCatalog's exception logging
- a SITE_CATALOG property no longer causes problems for setupCatalog (it will use the specified catalog name)
- fix a missing import which allowed problem pages to stop upgradeAll
- code refactoring: extract Editing and Utils mixins, many cleanups
- new defaultPage, defaultPageId, defaultPageUrl methods look for a default_page property, or a page named "FrontPage", or the first page in the folder (see the sketch below). Other new methods include: pageCount, pageIdsMatching, pageNamesMatching, *Url helper methods for the site navigation links
- reparenting cleanup, bugfixes
- allow creation of new pages from a webdav-locked page
- cleanups, bugfixes, speedups for rename & delete plus some new utility methods. rename was doing an unnecessary updatebacklinks pass, reparenting all offspring not just its children, and just generally trying too hard.
- set correct last editor when updating backlinks during rename
- found the bugger that was leaving all those placeholder pages when renaming
- log more verbosely during renames to show what's going on
- add update backlinks functionality to delete, similar to rename, so you can enter a name in the page management form to have all links redirected there after deletion (a "replacement page")
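A hedged sketch of the default-page machinery described above (the HomePage value is hypothetical; the string property type is an assumption):

    # hedged sketch: point the wiki's navigation at a chosen front page
    folder.manage_addProperty('default_page', 'HomePage', 'string')
    # then, e.g. from a Python Script on any page in this wiki:
    url = context.defaultPageUrl()  # default_page, else FrontPage, else first page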
- disable subtopics display entirely unless there's a true show_subtopics folder property
- fix "global name 'p' is not defined" when adding pages via the plone UI

Skins and content
- default, zwiki_plone: new skin-based views - instead of the DTML-based RecentChanges, SearchPage, UserOptions, IssueTracker and FilterIssues pages, new wikis now use skin-based views (similar to editform etc) by default: recentchanges, searchwiki, useroptions, issuetracker and filterissues. As usual, these can be changed by customizing the page template of the same name, in the wiki folder or in the CMF or SkinnedFolder skins tool. If you later want to switch back to page-based versions, eg for easier tweaking, you can call /setupDtmlPages and /setupTracker.
- default: the main wikipage template has been completely redone using METAL macros and CSS (Dan McMullen)
- default: the default skin uses a new UI method, stylesheet, to find its stylesheet. This can be customized with a File named either "stylesheet" or "stylesheet.css". A page template or dtml method may also be used, in which case the stylesheet will be dynamic and will be reloaded for each page view.
- default: adopt the latest zwiki.org look as default - use a sans-serif font by default, use the full browser window, indent page content, cleaner/more styled context and subtopics. Has been tested only with mozilla 1.6; assume other browsers can cope.
- default: smarter site navigation links; work with page-based or skin-based views as appropriate; show the issues link only if there are issues
- make our skin directories available to FileSystemSite if installed, to support a SkinnedFolder setup
- contextX and renderNestingX are skinnable versions of context and renderNesting, which may be used to present page hierarchy in more flexible ways (Dan McMullen)
- fix an obsolete try/except that was hiding errors in the default wikipage template; show such errors in the browser as elsewhere
- default, zwiki_plone editform: reduce default text area height to 20; fix tab indexes so the save button is last
- FrontPage: note webdav lock permissions in EE setup docs
- RecentChanges, recentchanges: make the log note bold when summaries are displayed, too; hide buttons when there is no catalog; simplify the code, clean up and indent
- HelpPage: content, formatting updates
- instead of a horizontal rule, use a styled heading for the comments section, consistent with subtopics
- zwiki_plone: fix the contents link, rename the recent changes link
- default backlinks: the upper checkboxes were being ignored
- default, zwiki_plone recentchanges: international characters in page names were being quoted
- default, zwiki_plone: move the comments heading style to the stylesheet and use the anchor "comments" instead of "messages"
- zwiki_plone: move some tabs into the "actions" category, for now, after discussing with limi
- default contents: fix string: typo
- default, zwiki_plone: make filter issues accessible via link from issue tracker and vice-versa
- default: use the standard stylesheet and site links panel in all views
- zwiki_plone: install actions (tabs) for the new skin views (leave useroptions disabled)
- default, zwiki_plone: fix the issuetracker & filterissues forms
- default, zwiki_plone: latest zwiki.org search form with hit counts
- remove the contents link from page context in full mode, it's superfluous now
- remove confusing #parents tag from default content files,
  we don't need it.

Tracker
- setupTracker now does not install dtml pages by default, sets up an issue_colours property, creates one dummy issue if there are none, and redirects to the issue tracker afterward
- make status counts work for statuses other than open/pending/closed
- new methods: issueCount, hasIssues

General
- change ZMI add menu's 'Wiki' to 'ZWiki', to locate it back beside 'Z
- 'View history' permission
- the access key for backlinks is now L everywhere
- zwiki_plone: tidy up wikinav_portlet a little

Mail and messages
- fixed never-encountered bug in subscriber list upgrading (IssueNo0540)

Permission renames, useful page/catalog/tracker setup methods, more functional default skin with slow but nifty hierarchy navigation options, default content updates, bugfixes.

Upgrade notes

Three permissions have been renamed, and wherever you have configured these ('Add ZWiki Webs', 'Add ZWiki Pages' & 'Zwiki: Add comments to pages') you'll have to set them again with the new names. You may want to review your settings before upgrading. Sorry; it should be easy to write a script to automate this, I don't have one.

As usual, but especially if upgrading from a very old version, it doesn't hurt to visit SOMEPAGE/upgradeAll once as manager (except, it may be slow in large wikis). This will ensure all your pages have the latest attributes and have been rendered with the latest code. Do this if you want to use the subtopics display option below.

Regulations support was not working and has been dropped - I don't expect this to affect you, but if I'm wrong, join the discussion on zwiki.org.

General
- permissions & ZMI menu renames for usability/consistency:
  - 'Add ZWiki Web' in the ZMI add menu is now 'Add Wiki'. ('Add ZWiki Page' remains as is, to avoid changing the meta type.)
  - 'Add ZWiki Webs' permission is now 'Zwiki: Add wikis'
  - 'Add ZWiki Pages' permission is now 'Zwiki: Add pages'
  - 'Zwiki: Add comments to pages' permission is now 'Zwiki: Add comments'
- new wiki setup methods. From any page, visit these urls as manager to quickly set up standard wiki features:
  - setupPages - install default wiki pages (like Add Wiki in the ZMI)
  - setupDtmlMethods - install index_html and standard_error_message
  - setupCatalog - install or configure a wiki catalog, with appropriate configuration for optimizing large wikis. CMF/Plone notes: this one is called automatically by the CMF install script, to simplify installation. It adds all the indexes and metadata that Zwiki expects from a catalog (see setupCatalog for a list). NB these will apply for all catalogable plone objects, not just zwiki pages, but will (hopefully) be empty/harmless for the non-pages.
  - setupTracker - configure the wiki & catalog for issue tracking and install the IssueTracker and FilterIssues pages
  These also work inside CMF/Plone, except for setupDtmlMethods, which may not work there.
- pages() now always returns brains with complete metadata, even when a partial catalog is present. This allows a number of things, eg creating/renaming in a cmf/plone site where setupCatalog has not been called, to keep working, at the cost of more zodb access (IssueNo0623 and others)
- hierarchy navigation links and subtopics links: if your wiki folder has a true 'show_navlinks' boolean property, the default skin in full mode will show a navigation links panel. These are GNU Info-style next/previous/up links, with N/P/U quick access keys respectively. See zwiki.org for an example.
If your wiki folder has a true 'show_subtopics' boolean property, and the user is currently in full mode (their zwiki_displaymode cookie is "full"), pages with children will have a "subtopics" panel displayed, after the document part and before any messages. (This will appear after the next edit, or after running upgradeAll.) Subtopics display can also be turned permanently on or off for a specific page, and all pages below it in the hierarchy, by setting an option in the default skin's editform (requires 'Zwiki: Reparent pages' permission). We may find a way to simplify this later, ideas welcome.

Both of these options (navlinks and subtopics) are expensive in a large wiki, even if you set up a catalog to optimize it (and you should). When active they will slow page rendering down a lot (ballpark figure: zwiki.org FrontPage goes from 0.1 to ~1 second).

- regulations support dropped - this has been broken for some time, and no one used it anyway because it makes life too complicated. I'll be removing and mothballing the code next month, assuming no one steps forward to make it worth keeping. We might revive it in a simpler form later.
- there is a new PREFER_USERNAME_COOKIE global option in Defaults.py; set this true to allow the username cookie to override the authenticated name. Useful for a simple community wiki protected by a single shared login. (See the sketch below.)
- STX footnote links were broken by 0.22's refactoring (IssueNo0605)
- disable iframe tags by default, like javascript (IssueNo0588)
- wiki link regexp tweak: remove an unnecessary restriction on the end character of bare urls, and allow single-character remote parts in interwiki links
- fix a bug in rename where children's parents field was not updated, leading to an error when you tried to view that child
- make renaming's update backlinks pass more robust and more verbose (IssueNo0594, James Collier)
- make post-delete redirect work when the parent is freeform-named
- saving a page via ftp or webdav after removing the blank line between headers and text was giving an error (Dan McMullen). This should fix the error (but may not do what's wanted..)
- misc. code cleanups: move admin-related methods to Admin.py; begin the long-awaited Parents.py cleanup; move the WikiNesting code into ParentsSupport and simplify; various new methods
- convert more unit tests to ZopeTestCase; make MockZWikiPage a bit more useful; first unit tests for cmf/plone and page hierarchy functionality

Skins and content
- default: the default skin has been rearranged somewhat, as at zwiki.org; cf recent GeneralDiscussion. Less gray, widgets grouped into panels, site links moved to top, annoying quote and user bookmarks dropped, navigation links support. Note the various site links appear only if you have a corresponding page, following the names used at zwiki.org.
- default wikipage: freeform names now work when reparenting via the page management form. Only a single parent can be set there now.
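The shared-login setup described above is a one-line change in Zwiki's Defaults.py (the option name is from the entry above):

    # let the username cookie override the authenticated name,
    # e.g. for a community wiki behind a single shared login
    PREFER_USERNAME_COOKIE = 1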
- zwiki_plone wikipage_view: fix freeform links in the ancestor list (IssueNo0603)
- default, zwiki_plone editform: when creating a new page, default to the wiki's default page type, not the parent page's type
- default editform: fix an editform breakage with zope 2.5, apparently due to a TAL difference
- default, zwiki_plone backlinks: make the reparent form action url more robust, to fix reparenting in CMF/Plone (IssueNo0610)
- default backlinks: always explicitly list a page's parents, as well as its backlinks; don't show reparent controls if the user does not have permission
- a wikipage() UI method has been added, which you can use (in portal_types -> Wiki Page -> actions -> view) to allow a non-CMF skin like CommonPlace to be used inside a CMF/Plone site. Possibly just a temporary hack.
- quick access keys in the default skin have been rearranged - see QuickReference
- RecentChanges: latest zwiki.org code, catalog-optimization-compatible; comment out old brute-force code which should no longer be needed; fix last edit times when there is no timezone cookie
- UserOptions: make it function inside a CMF/Plone site too (useful with alternate skins); remove the no-longer-used quote & bookmarks options; include valid Pacific timezones (except for a couple which DateTime doesn't support) (IssueNo0595, James Collier)
- SearchPage: catalog search improvements: make help text always visible, search only within the current wiki, also show page title hits (but the default code assumes the Title index is a TextIndexNG2 with left globbing and case folding enabled, otherwise it may not work so well)
- zwiki_plone: the experimental skin-based versions of DTML pages are now generated automatically and the tracker has been added (untested). These are: recentchanges, searchwiki, useroptions, issuetracker, filterissues.

Mail and messages
- mailout signatures now include the precise message url
- a whitespace + horizontal rule separator is inserted before the messages section. You'll see this after the next page edit or upgradeAll.
- update the catalog after subscribing/unsubscribing (fixes non-updating "other page subscriptions" when a catalog is present)
- new 'fewer_threads' option, disabled by default, tries to gather comments under a single thread per page per month

Tracker
- creating issue pages without first setting issue_* folder properties didn't work; it should work now. The necessary folder properties will be installed as needed.
- IssueTracker: remove the "how to report a bug" link; latest-catalog-compatible code (so we don't always return all pages)
- FilterIssues: move the search form to the top; use the pages() method to ensure we search only this wiki; latest-catalog-compatible code

Memory efficiency/performance/scalability improvements; simpler page types and DTML control; zwiki_plone and default skin updates; wikimail tweaks; STX images; bugs, fixes, features.

Upgrade notes

Control of page types and DTML has been simplified. NB in this release wikis tend to allow embedded DTML by default; to disallow it, add a 'no_dtml' property as described below.

The backlinksFor() and pages() methods have changed; Title() is now preferred instead of title_or_id(); and RESPONSE is no longer provided in the default namespace for skin templates (see below). If you have DTML pages or customized skin templates using these things, they may need to be updated.

If using a catalog, you should add the Title field to your metadata if not already present.
Use of title_or_id is being phased out, but you may want to leave it in your catalog until Zwiki 0.23 to be safe.

General
- page type cleanups and DTML changes: deprecated page types have been removed, as have the non-DTML-supporting STX and HTML types. DTML is now controlled as follows: a 'no_dtml' property (value doesn't matter) on page or folder will disable Zwiki's DTML functionality below that point. You can set this on the root folder to disable it server-wide. NB: non-CMF wikis, and CMF/Plone wikis which don't use the zwiki_plone skin, are DTML-enabled by default for the moment. The zwiki_plone skin includes an empty 'no_dtml' method which disables DTML; to enable it in all your CMF/Plone wikis, you need to remove ZWiki/skins/zwiki_plone/no_dtml.dtml and restart zope (or switch to a similar skin without that file).
- the 'allowed_page_types' lines property is supported again. It does not prevent setting any type, but is used to restrict the types offered in the default edit form. All supported types are offered by default. The type of new pages, if unspecified, now defaults to the first of the wiki's allowed page types. The 'standard_page_type' property is no longer supported.
- zodb caching improvements for large wikis: many operations which used to load all or many pages into cache will no longer do so if a suitably configured catalog is available. This can greatly reduce cache activity and peak per-transaction memory usage for large wikis, improving performance and server uptime. Renaming and deleting are still relatively memory-expensive; more optimizations to come. To fully benefit, you should have a wiki catalog with meta_type, id, Title, path, canonicalLinks, isIssue indexes and id, Title, issueColour, parents, links metadata (IIRC; see the catalog sketch below). NB most wikis don't need to worry about this.
- the pages() method now returns catalog brains if possible, otherwise brain-like objects (if there is no catalog), rather than (expensive) page objects. It will also pass keyword arguments to the catalog, if there is one, so it is convenient for searching among the pages of the current wiki (only). pageObjects() provides the old behaviour.
- likewise, backlinksFor() now returns either catalog results or similar brain-like objects. Old custom backlinks templates should still work without upgrading, except in this (unlikely) case: when there is a catalog with meta_type, path, and canonicalLinks indexes but without page_url and linkTitle metadata. (with help from Magog)
- the rendering code uses a simpler, more memory-efficient pre-rendering and pre-linking scheme. The relative_urls property is no longer supported.
- DTML-enabled pages now invoke DTML only when there is code in the page, to avoid unnecessary parsing and reduce our memory footprint
- wiki link titles (last-edited info in tooltips) have been disabled for the moment
- three new boolean properties are supported, on page or folder: use_wikiname_links, use_bracket_links, use_doublebracket_links, for configuring your linking syntax of choice (lightly tested). You'll need to run /clearCache or /upgradeAll after changing these.
- be smarter about choosing where to store uploaded files - when checking for the "uploads" subfolder, make sure it is a subobject of the wiki folder, and make sure it is folderish
- we support the STX :img: syntax, finally (IssueNo0601, etc.)
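A hedged sketch of preparing a catalog with the indexes and metadata listed in the caching entry above. These are standard ZCatalog calls, but the FieldIndex type for every index is an assumption, and setupCatalog can do the equivalent for you:

    # hedged sketch, assuming 'catalog' is the wiki's ZCatalog object
    for name in ['meta_type', 'id', 'Title', 'path',
                 'canonicalLinks', 'isIssue']:
        if name not in catalog.indexes():
            catalog.addIndex(name, 'FieldIndex')  # index type assumed
    for name in ['id', 'Title', 'issueColour', 'parents', 'links']:
        if name not in catalog.schema():
            catalog.addColumn(name)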
- bare urls containing ; (semicolon) are now recognized (DeanGoodmanson)
- renaming fixes: add or remove brackets when appropriate; when updating links, tolerate edit failures (eg due to regexp recursion); also replace changed ids
- pages are now reindexed after being reparented, when a catalog is present
- catalog lookup behaviour has changed: "By default, Zwiki looks for an object named 'Catalog' in this wiki folder (will not acquire) or a 'portal_catalog' (can acquire). If a SITE_CATALOG property exists (can acquire), Zwiki will look for an object by that name (can acquire); if no such object exists, or SITE_CATALOG is blank, no catalog will be used."
- new catalogId() method returns the id of the catalog in use, or None. Requires 'Manage properties' permission.
- hasCatalog() is now public
- getPath() is now supported
- Title() is now always provided and is equivalent to pageName(). Use of title_or_id is now deprecated; DTML pages and skin templates should use Title instead, to take best advantage of future catalog optimizations etc.
- other misc. new methods: pageName(), summary(), size(), cachedSize(), cachedDtmlSize()
- ChangeLog is no longer provided in releases for the moment, to simplify maintenance

Skins and content
- RESPONSE is no longer provided in the namespace for DTML Method skin templates; they must use REQUEST.RESPONSE (code simplification)
- new methods recentchanges(), searchwiki(), useroptions() provide experimental skin-based implementations of these wiki pages, which can be used when DTML pages are not allowed. The included implementations are preliminary and unskinned (but customizable); they are DTML methods so as to reuse code from the evolving page-based implementations, but you can also use page templates.
- if a SiteMap page exists, the "site contents" link in full mode will point there instead. Also contents() now accepts a 'here' page name argument to allow this page to control the "you are here". See zwiki.org for an example.
- default skin: reduce subject field width again to "fix" the too-wide comment form
- default editform: show the page rename field as in the plone skin (but display it only if the user has rename permission)
- default search field & SearchPage: use GET when searching, for more useful URLs
- default, zwiki_plone editform: show the STX and HTML page types with "(+ DTML)" when DTML is allowed
- default, zwiki_plone editform: clear the "sticky" last log note when creating a page
- default, zwiki_plone subscribeform: fix other subscribed page links
- zwiki_plone: initial cut of a wiki navigation portlet for Plone (AlexanderLimi)
- zwiki_plone: added a File Uploads Overview template (AlexanderLimi)
- zwiki_plone: made the wiki editform explanation more clear (AlexanderLimi)
- zwiki_plone editform: make the convenient alt-s (save) key work again
- zwiki_plone: fix links in the ancestor list
- content/basic/RecentChanges.stxdtml, SearchPage.stxdtml, UserOptions.stxdtml, and content/tracker/IssueTracker.stxdtml, FilterIssues.stxdtml: latest catalog-optimized versions from zwiki.org
- content/basic/SearchPage.stxdtml: remove other search tools for now

Mail and messages
- fix message id inconsistencies preventing proper threading in mail clients (IssueNo0587)
- mailout policy change: don't mail out page renames and deletions with the default 'comments' policy. Also don't mail out when placeholder pages are created during rename.
- don't put brackets around the mail sender's name in message headings
  (too clever)
- the 'auto_subscribe' option now automatically subscribes a page's creator, as well as commenters. Also it's possible to use this property on a page now.
- the subscribe/unsubscribe methods now support a redirectURL REQUEST parameter, which makes them more flexible
- fix unicode/UnixMailbox problem. Comments made on the placeholder page left after a rename would not appear, since the text was unicode. Removed the i18n for the moment so this works.
- try a fix for non-formatted comment headers in certain timezones (IssueNo0552)

Bugfix release.

- support old Wiki Folders (IssueNo0571)
- drop confusing allowed_page_types option
- fix scroll to bottom after comment

A number of important bugfixes & usability tweaks for wiki mail and the Plone/CMF skin, and preliminary optional Purple Numbers support (fine-grained linking).

Upgrade notes

Should be routine. Upgrading to this release is recommended.

General
- CMF/Plone: Wiki Folder is no more! Use ordinary folders instead. NB this means you can't create a pre-populated wiki for the moment; copy RecentChanges, UserOptions, SearchPage, HelpPage, ZWiki etc. from zwiki.org if you need them.
- experimental Purple Numbers support for zwiki. A true 'use_purple_numbers' boolean property on the folder or page will cause persistent id numbers (node ids) to be embedded within source text and rendered as purple links (after your next page edit). These are usable (with quirks) on (non-DTML) STX pages, not yet on RST or WWML or the non-wiki page types. Performance impact when rendering large pages is unknown. (Mike Mell, Simon Michael, thanks to Eugene Kim & PurpleWiki)
- fixed a pageWithFuzzyName error in BTreeFolders (IssueNo0535)
- stxprelinkfitissue and wwmlprelinkfitissue page types weren't being auto-upgraded (IssueNo0546)
- use absolute urls in STX footnotes and comment headings so these links work in CMF/Plone (IssueNo0550)
- use latest page types in new wikis, avoiding excessive "non-allowed page type" warnings in the edit form (IssueNo0564)

Skins and content
- the zwiki product icon was not checked in to CVS as a binary file, which may have been causing problems for (even crashing!) certain web browsers in the ZMI (Alexander Limi)
- zwiki_plone: close anchor tags in comment headings to prevent bogus hyperlinks in Plone
- zwiki_plone: initial cut at breadcrumbs-style display of page parents (Alexander Limi)
- zwiki_plone wikipage_footer: don't show the add comment box if the user doesn't have add comment permission
- default, zwiki_plone: make the editform's non-allowed page type warning show the developer name again, not the end-user name
- default, zwiki_plone: if a user edits a page again within 24 hours, re-use their last log note by default (idea: Dean Goodmanson)
- default, zwiki_plone skins: editform code cleanups. The options used by the editform template are now 'page', 'text' and 'action' as described in the comments. 'id' and 'oldid' options are no longer used (but are still provided by Zwiki, for backwards compatibility with old templates).
- zwiki_plone editform rename support: you can now rename a page via the edit form. A placeholder page will be left behind and links on other pages will be updated if possible (this may need to be made more robust). This ensures Zwiki-compliant page ids, makes wiki page renaming possible in plone, and makes wiki page creation via the CMF/Plone content management interface work better.
  Also, as a special case, if you are editing the sole page in the wiki, the page name field will default to "FrontPage", which latest Plone will display by default. To support this, edit() and create() now also do a rename if a new name is passed in the "title" argument (so named for backwards compatibility). (idea: Alexander Limi)
- zwiki_plone editform: clean up attributes/fix i18n for page type radio buttons (fix for IssueNo0549?)
- zwiki_plone editform: in the heading, show whether you're editing or creating (Alexander Limi)
- zwiki_plone editform: fix a 0.19 bug which showed the originating page's text instead of an empty field when creating a page (IssueNo0557)
- zwiki_plone editform: make editform work with CMF again (IssueNo0560)
- zwiki_plone editform: move least-used widgets to the bottom

Mail and messages
- comment heading display tweaks - the message number is no longer displayed, the date now links to a more permanent url based on the message id, and the email sender's name is now enclosed in [] to link to personal pages
- citations no longer cause HTML markup to be added to the page text
- citations no longer have an extra blank line displayed before them
- smarter paragraph filling in mail-outs: preserve citation prefixes and leave indented paragraphs alone
- like TestPage, a page named 'SandBox' no longer sends mail to whole-wiki subscribers
- display no messages, rather than an error, if unicode gets into the page (and document some of the ways this can happen)

Simpler page types, smarter message handling, auto subscription option; mail, skin and miscellaneous bugfixes; python 2.1 or greater now required.

Upgrade notes

Page types have been simplified and renamed (both the end-user and developer names) as mentioned below. If you have standard_page_type or allowed_page_types properties on your wiki folders, you should update them to the latest types (in the left column below).

I have dropped python 1.5.2 compatibility as of this release; this allows us to use more up-to-date python 2.1 features and means zwiki 0.20 and later won't run on very old zope installations.

The comment method's arguments have changed.

General
- more page type simplifications. As shipped, Zwiki now supports four main wiki page types and three non-wiki page types:

    # wiki page types
    'msgstxprelinkfitissuehtml'    : 'Structured Text',
    'msgstxprelinkdtmlfitissuehtml': 'Structured Text + DTML',
    'msgrstprelinkfitissue'        : 'reStructured Text',
    'msgwwmlprelinkfitissue'       : 'WikiWikiWeb markup',
    # non-wiki page types
    'html'      : 'HTML',
    'dtmlhtml'  : 'HTML + DTML',
    'plaintext' : 'Plain text',

  The wiki page types have similar features (message formatting, text formatting, wiki linking, pre-rendering, fit tests & issue properties) except for the choice of text formatting rules (and only the STX types allow HTML). A wiki will offer a restricted subset of these in the editform: Structured Text, reStructured Text, WikiWikiWeb markup, and HTML by default. To offer other types, set a custom allowed_page_types lines property on your wiki folder (or change ALLOWED_PAGE_TYPES in Defaults.py).
- new message storage format: comments are now stored as rfc2822-style messages in mbox format (think of a wiki page with an mbox stuck on the end). This looks more verbose in the page source, but allows rich message data to be stored in a standard way. Messages must begin with "strict" mbox-style 'From ' separators. (If you ever need to add these, upgradeMessages may help.)
  Messages stored in this way are formatted for display with numbers and named anchors (#msgN, #msgidXXX, #messages). Message bodies are formatted with the usual text formatting rules. Message-id and in-reply-to headers are stored, and threads are preserved when posting via the web. This behaviour has been added to all the main page types, but some features require STX mode at present.
- fix memory leak when printing tracebacks (IssueNo0536, Shane Hathaway)
- clean up '\r\n' line terminators, if added by the browser, in comment()
- fill paragraphs in mailouts, finally. Also wrap at column 70, not 78. (IssueNo0340)
- try to preserve indentation in mailouts somewhat. Each paragraph will be re-indented as per its first line. This means indentation within a paragraph will be lost, and a paragraph with only its first line indented will become wholly indented. Hopefully this will be better than the old policy of throwing away all indentation. (IssueNo0538)
- prevent duplicates when integrated with a mailing list, by excluding the address in the X-BeenThere header from mailout (comment's do_mailout argument replaced with exclude_address) (IssueNo0519)
- a true auto_subscribe folder property will cause any commenters on a page to be subscribed there (authenticated/usernamed commenters, at least)
- exclude a page named "TestPage" from whole-wiki subscription. Page subscribers will still receive mail from it.

Tracker
- supply default categories, severities, statuses lists for the issue properties form if they have not been defined. This should allow "IssueNoXXXX ..." pages to work in any wiki without additional setup.
- tolerate whitespace in the issue_colours list

Skins and content
- zwiki_plone: added the last basic elements of the Plone styling, cosmetic change to footer, external edit link moved to header (Alexander Limi)
- skins/zwiki_plone/subscribeform.pt: was a conflict ... I manually resolved it... please test (Alan Runyan)
- zwiki_plone: minor fixes: fix change email address button label, remove whitespace from comment form, remove stray code breaking editform
- default: name error when changing email address in subscribeform, fixed (IssueNo0537, Laura Trippi)
- default, zwiki_plone: make editform tolerate an out-of-date allowed_page_types property (if allowed_page_types contains a type that no longer exists, we just list it with the others and rely on auto-upgrade to fix the page); also tolerate whitespace there
- minor xhtml-compliance fixes
- make skin template error messages more to the point

Preliminary reStructured Text support, page types cleanup, skin bugfixes, customizable issue colours.

Upgrade notes

Nothing special this month.

General
- preliminary reStructured Text support, based on Andreas' and Richard's work
- page types cleanup. Retired a bunch of old types, added new ones. Fit tests and issue properties form support is now standard.
- new allowedPageTypes method returns the allowed page types for the wiki, defined in Defaults.py or an allowed_page_types folder lines property (see the sketch below). Only these types can be selected when changing the page type (except in the ZMI). Note: at present a new wiki will not allow DTML-enabled types, so if a user edits one of the DTML pages like RecentChanges, its type will change, disabling it. The edit form gives a warning in this situation.
- when creating pages, don't inherit the page type from the parent; use either the wiki's standard_page_type, the specified type, or the zwiki default. Should make administration simpler.
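A minimal sketch of the allowed_page_types property mentioned above, using the developer names from the page type table earlier in these notes (the choice of two types here is hypothetical):

    # hedged sketch: offer only two wiki page types in the editform
    folder.manage_addProperty('allowed_page_types',
                              ['msgstxprelinkfitissuehtml',  # Structured Text
                               'msgrstprelinkfitissue'],     # reStructured Text
                              'lines')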
- creationTime and lastEditTime now always return a valid DateTime
- new page methods: ageInDays, lastEditIntervalInDays
- standard_error_message was getting instantiated as a wiki page in CMF (IssueNo0510)
- fix NameErrors due to missing imports

Skins and content
- start with no bookmarks by default
- UI code cleanups, more useful error messages
- use an xhtml-compliant tag for uploaded images
- default: replace DTML methods with page templates (IssueNo0508). Zwiki now requires Page Templates.
- default: XHTML 1.0 compliance fixes for the default skin (IssueNo0399, Jordan Carswell)
- default: subscribeform layout tweaks, remove extra title tag
- default: fix editform action url
- default: fix contentspage tal:contents typo
- default: replace stray DTML in editform with TAL (IssueNo0506, Jordan Carswell)
- zwiki_plone: remove the bookmarks & help links from the page footer

Tracker
- IssueTracker & FilterIssues dtml cleanups, robustness enhancements
- instead of highlighting open and pending issues, we now highlight open and pending non-wishlist issues more than 60 days old
- issue colours code cleanup; issue colours are now defined in Defaults.py or an issue_colours folder lines property

Full Plone and CMF skin, miscellaneous fixes.

Upgrade notes

If you have a cmf_install_zwiki external method, refresh it after installing this release (by saving it, or restarting zope).

The new zwiki_plone skin replaces the zwiki_cmf skin; although the latter is still shipped, it will hopefully go away soon. An existing CMF zwiki should keep working as before. To change it over to the zwiki_plone skin, do this:

1. in your CMF site's portal_skins Contents tab, delete the default and zwiki_cmf folders (and zwiki_orig, if you don't use that)
2. in the properties tab, remove zwiki_cmf (and zwiki_orig) from all skins
3. run the install method again (CMFSITEURL/cmf_install_zwiki)

Structured text headings will get bigger next time you save the page (or visit SOMEPAGEURL/upgradeAll to re-render all pages).

General
- structured text headings now start at H2, not H3
- when using regulations, don't require change regulations permission just to comment (IssueNo0485)
- be more robust displaying pages with old-style page ids that have not yet been upgraded (IssueNo0495)
- use precise links in contents, rather than depending on standard_error_message (IssueNo0454, Leslie Barnes)
- when saving a page via external editor/ftp/webdav, don't require a space after the : in the safety belt/type/log headers. Was causing a name error with external editor + emacs whitespace mode (IssueNo0438, Andrew Burrow)
- if there is an id renaming collision during upgradeAll, just log the error and continue (IssueNo0483)
- edit conflict & lock dialogs were giving NameErrors
- new method: ancestorsAsList

Skins and content
- a fully functional Plone (and CMF) skin, at last! The zwiki_plone skin provides most current features of Zwiki's default skin, within the standard plone UI. It should also be functional for CMF sites, allowing zwiki_cmf to be retired. (Alexander Limi funded by Walt Ludwick, with Simon Michael, Sidnei da Silva, Alan Runyan)
- skin code fixes; Zwiki accepts Page Templates, Filesystem Page Templates, and DTML Methods as skin objects, and will warn if any other type is found
- UserOptions was not honouring the redirectURL argument if just clearing cookies
- tweaked blank subject comment heading format
  (not happy with it yet)
- mailouts were failing when creating a page with no initial text
- make comment mailout work when there is no subject_heading field (IssueNo0481, az` on irc)

Tracker
- when changing issue properties, add a user-supplied or default comment to the page recording what was done (DeanGoodmanson)

Simpler page ids, faster performance and better memory efficiency, new general-purpose page type including tracker and fit support, more robust parenting, skin improvements, preliminary stylesheet support, code cleanups, doctest no longer used.

Upgrade notes

If you have pages with punctuation in the names, or zwiki tracker issues, you'll need to change their id/name. See below.

If you have a tracker in your wiki, you'll also need to install the latest IssueTracker/FilterIssues pages and add a new 'isIssue' FieldIndex to your catalog. Also, in a large wiki you may find changing issue descriptions is much slower than before. See below.

issuedtml and stxprelinkdtmlhtml pages will be auto-upgraded to the new stxprelinkdtmlfitissuehtml type.

The CMF install method in Extensions has been renamed to Install.py.

General
- simpler page ids: page ids are now always derived from the name (title). This improves performance and removes a source of confusion, but means pages can no longer have a totally different title and id (like tracker issues used to). Also, page ids no longer include punctuation, and fuzzy links now ignore punctuation. Pages with old-style ids should be renamed so that wiki links to them will work. You can upgrade these pages and update links throughout the wiki by visiting SOMEPAGEURL/upgradeAll. This may take a long time; you can watch progress in the debug log. It's safe to run this more than once. Alternately, you can visit SOMEPAGE/upgradeId to rename and relink a single page. Some incoming links may break due to url changes; having the zwiki standard_error_message installed will help.
- improved performance and memory efficiency: link rendering is faster due to the new ids and code refactoring. (Unscientific ab test: roughly a 4x speedup rendering a normal page and 20x for a long one.) Also, zwiki is no longer so eager to load all pages into the zodb cache. Certain operations will still trigger this (search, rename), but in general use a zope serving large wikis should now use less memory. This may also help your performance and reduce "first page render" delay.
- new/improved page lookup methods: pages, pageIds, pageNames, pageIdsStartingWith, pageNamesStartingWith, firstPageIdStartingWith, firstPageNameStartingWith, pageWithId, pageWithName, pageWithNameOrId, pageWithFuzzyName (see the sketch below). Use these in your DTML code when possible, as they will be optimized. pageWith* now return a correct acquisition wrapper (fixes IssueNo0472). pageWithFuzzyName's ignore_case has no extra cost now and is always on; the argument is left in place for backwards compatibility for the moment. Also this may have been acquiring pages, fixed.
- new general-purpose page type incorporating issue and fit support: stxprelinkdtmlfitissuehtml. stxprelinkdtmlhtml and issuedtml pages are auto-upgraded to this. This page type displays an issue properties form if the page name begins with IssueNo. Also, any tables whose first cell begins with "fit." or "fittests." are run as fit tests at page view time.
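A hedged sketch of the lookup API listed above, e.g. from a Python Script with a wiki page as context (the page names here are hypothetical):

    # hedged sketch of the new page lookup methods
    page = context.pageWithFuzzyName('front page')        # ignores case & punctuation
    issue_names = context.pageNamesStartingWith('IssueNo')
    all_pages = context.pages()                           # all pages in this wiki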
- rename improvements: support for upgrading to new-style ids; allow renames without sending mail; reduce unnecessary indexing; preserve creation info when renaming (IssueNo0398); be more tolerant of confused parentage; - Don't leave orphans when deleting a page - cleaner catalog logging - all doctests have been converted to pyunit tests - Details of WWML changes in last month's release (PeterMerel): - C2-compatible apostrophe embedding using minimal-munch - C2-compatible generation of indents from leading spaces - Intuitive fixed-width tables using | as a column delimiter. I purely hate the STX tables, which do everything except the most obvious formatting everybody wants. - Ability to use images as page names with [image-url] - Ability to generate blockquotes by using = instead of * - Ability to force a BR in definitions using \ - Probably some other things I've forgotten. Skins and content - the latest zwiki.org RecentChanges: fix a misplaced try which allowed errors, be less chatty about brute force/catalog, move note field next to page name, drop the "use table" option and the T issue indicator. - edit form layout improvements - lose the gray title background and use site logo on all "form" pages - Made the diff form skinnable: support a 'diffform' page template or dtml method accepting two arguments: 'revA' and 'difftext'. - Made the contents view skinnable: support a 'contentspage' page template or dtml method accepting two arguments: 'hierarchy' (an html string) and 'singletons' (a list of html strings). - allow multiple space-separated wiki names in backlinks' other parents field and the page management form (PeterMerel) - contents improvements: make "you are here" bold; list singletons at the bottom, if any. (PeterMerel) - "contents" is used more consistently and is the preferred new name for the map method. - page creation links now have class "new" (IssueNo0456) - preliminary CSS support: the default skin embeds a "stylesheet" object if present. - longish headings were not being stripped from mailouts - mail out comments with their original formatting (not the page diff) - don't try to find a subject in comment body any more - don't send mailouts from a page named TestPage, whether or not it has [test] subject; log discarded [test] mailouts CMF - cmf_install_zwiki.py has been renamed to Install.py (IssueNo0460) Tracker - Issue pages have a new naming scheme to conform with the new page ids: "IssueNoNNNN issue description". upgradeAll will take care of this for you. Old IssueNoXXXX urls will still work if you have standard_error_message installed. You need updated IssueTracker/FilterIssues pages which can be found in ZWiki/content/tracker. A new 'isIssue' FieldIndex is required in the catalog. - changeProperties has been replaced by changeIssueProperties. This does a full page rename, with link updating, when changing an issue description (slow). - The 'issuedtml' page type is deprecated; the new combined 'stxprelinkdtmlfitissuehtml' type is used instead. Old issue pages will be auto-upgraded. See also HowToInstallAZwikiTracker. - If you use the new page type throughout, you can change an ordinary page into an issue by renaming it "IssueNo..." and vice versa. CMF skin updates, various mail tweaks to support mailing list integration, enhancements to comment behaviour, WWML, misc. bugfixes. Minor bugfixes and some changes to wikimail behaviour. Upgrade notes If you're doing mailin, don't forget to re-save your external method after installing this release. A mailin.py rewrite.
Upgrade notes If you don't use mailin, no need to upgrade. After installing this you'll want to click your external method's save button to update it. Note this version of mailin.py drops the special virtual host support. - a rewritten mailin.py with tweaked delivery rules and unit tests. Fixes IssueNo0376. - be more fussy about recognizing a recipient's real name as a page name - bracketed free form names in the subject are now also recognized (it will use the first bracketed thing found) A "bugfix-plus" release, for IssueNo0385 (missing permissions). - finish the switch to modern security declarations, fixing IssueNo0385 and hopefully not breaking much else. - also send mailouts on page deletion and renaming Bugfixes, more solid CMF & Plone support, wikimail enhancements, skins re-organization. Upgrade notes If you have mailin set up and find replies going to the default page instead of the originating page, see IssueNo0376. CMF - rename CMFInstall.py to cmf_install_zwiki.py - CMF skin support. zwiki_cmf is a lightweight CMF skin for zwiki; zwiki_orig is a CMF skin that looks like zwiki's standard UI - fix edit permission in CMF (IssueNo0366) - fix a timezones error when in CMF - make wiki folder & page management work in cmf/plone - catalog lookup changed: look for a CMF portal_catalog; look for SITE_CATALOG on page and containing folder only - when creating a Wiki Page in CMF, allow the standard_page_type folder property to control its page_type. - hide user options when in CMF - look up destination page in the recipient's real name (first); mailouts will encode the source page in the reply-to's real name. This should make addressing and replying more natural. - allow subject headings in mail and web comments. (To set the subject from a web comment, use an initial one-line paragraph that is bold (using '**' or '<b>') and no longer than 100 characters.) Subjects are saved as edit log note and vice versa. - generate a mailout when pages are created - mailin page creation fixes - mailin: handle freeform page names - mailin: work even when default page is not found, as long as there's at least one wiki page in the folder. - mailin: add some failure logging - look for 'mailin_policy' property instead of 'posting_policy'; the latter is still supported but deprecated - don't use <hr> in comment headings Skins and wiki content - clean out & reorganize templates and default content under skins, content - new unified RecentChanges implementation; use SITE_CATALOG, remove hard-coded zwiki.org General - wiki links to pages with accented names (and freeform names in general if standard_error_message is not present ?) were not working - parent wiki links were generating an error - workaround for zope 2.6 stx+dtml breakage (IssueNo0270) - fix for "some permissions had errors" (IssueNo0358) - fix title changing in ZMI Edit tab (IssueNo0280) - increase the added and removed lines truncation limits for diffs and mailouts (to 200 and 20 lines respectively) - ZWikiPage class initialization fix; fixes an event log file warning (and.. ?)
- auto-upgrade WikiForNow pages - log messages at BLATHER priority instead of DEBUG, and don't require the Z_DEBUG_MODE or ZWIKI_DEBUG variable any more - subscription now accepts CMF member usernames as well as email addresses - add zwiki web, add zwiki page forms internationalized; .po files & Spanish translation updated (JuanDavidIbanez) - preliminary fit (framework for interactive testing) support ('dtmlfithtml' & 'stxdtmlfithtml' page types) - allow a 'mail_subject_prefix' folder property Bugfixes, international page names, edit log notes, WikiForNow assimilation completed, CMFWiki integration (alpha). Upgrade notes See Regexps.py for notes on configuring international page names. When running large & extensively cataloged wikis, you might notice this version being more memory intensive than 0.10 (which itself may be more memory hungry than 0.9.9). This release should coexist with CMFWiki without problems. Otherwise the usual, see & for more. Misc - better support for (single-byte) international characters in page names & ids; enable out of the box, with or without a locale set up (IssueNo0257) - merge CMF support into standard ZWiki pages. See Extensions/CMFInstall.py. (alpha) (CMFWiki) (ChrisMcDonough) - locking fixes for external editor support (CaseyDuncan) - save an optional log note with edit (or type change, file upload, ftp PUT, page creation, deletion (via DeleteMe)) (WikiForNow) (KenManheimer) - allow page type to be set via FTP (WikiForNow) - Don't allow completely anonymous invocations of rename/delete (IssueNo0235) - reindex after renaming - set "FTP access" permission, send manage_edit through wiki editing code the way WikiForNow does (IssueNo0243) - a couple of compatibility tweaks for running inside a CMF/plone instance - provide view & SearchableText methods, use portal_catalog if present, don't use rule in comment headings. Default UI & wiki content - custom wikipage template was being called with the wrong context (IssueNo0225) - custom standard_wiki_header/footer methods were being ignored (IssueNo0228) - provide a traceback in html source when header or footer rendering fails - standard_error_message: fix missing "; make it work when not in site's root folder (IssueNo0250) - editform: move options down a row to handle long page names better; fix tab ordering - display last log note in history link title in header and at top left when diff browsing; use a form button to return from diff browsing - use ZMI widgets for page_type and NOT_CATALOGED (WikiForNow) Rendering - new "text + links" page type - wwml: some rendering fixes and convert spaces to tabs, described on ConvertSpacesToTabs (PeterMerel) - use new rendering code for issue pages (IssueNo0252, IssueNo0253) - enable stx bullet workaround for all zope versions (IssueNo0273) - filter blank items from mailout recipients (IssueNo0221) Compatibility - be more robust when upgrading timestamps (IssueNo0222) - be more robust with older zopes which may not have page templates (IssueNo0224) - don't use prefix tag in add zwiki web form for compatibility with old zopes (IssueNo0229) - be more robust reading version.txt.. zope-cvs doesn't have one (IssueNo0254) - fixprops.py utility method for fixing up zwiki page properties API changes (summary) Nothing major. Pre-rendering for better performance, new freeform page names and fuzzy linking, WikiForNow regulations support (beta), page renaming & deleting, UI enhancement & simplification, better upgradability, page templates support, i18n started, many bugfixes & minor enhancements.
Upgrade notes This release has renamed page types and a new render-caching mechanism, to which pages are upgraded automatically; full-featured UI defaults which will be used if you delete your custom DTML methods; and one new and one renamed permission. See & for more. Default UI - made built-in defaults full-featured (equivalent to current zwiki.org UI). - when adding a zwiki web, don't instantiate dtml methods, rely on the built-in defaults instead - support page templates as well as dtml methods for editform, subscribeform, backlinks and for main page layout (a page template named wikipage will take precedence over standard_wiki_header & standard_wiki_footer dtml methods). The page body is passed to wikipage as options/body. The following additional options are passed to editform: page, text, action, id, oldid. - built-in defaults are now read from the filesystem. Defaults for standard_wiki_header/standard_wiki_footer are still provided but no longer quite as up-to-date and probably will be deprecated. - consistent api for accessing UI methods/templates/defaults - many UI updates and simplifications in default page layout, editform, backlinks etc.; made UI options more flexible & robust - full/simple/minimal display modes; simple (no page hierarchy) is displayed by default - retired jumpsearch (make the search box always do a simple search) - added tooltips & access keys to most links & form elements - display a convenient page rename/reparent/delete form in the footer in full mode if the user has a username (and permissions) - external editor support - make backlinks dtml more robust (IssueNo0210); fixed a case where the parent checkbox didn't show up; now uses a catalog if available to provide accurate fuzzy backlinks - upload permission was incorrect in editform (IssueNo0178) - removed "Show advanced edit form" option - ensure comment form depends solely on Add comments permission - fix white space in center of footer in NS 4.7 - don't show "sp" for spacer image in text-mode browsers - a missing slash caused edits to fail in netscape 4.7 - added secret AnnoyingQuote edit link - better support for web robots - removed robot-excluding meta tag from standard_wiki_header/wikipage, added robot exclusion tag to editform/subscribeform/backlinks; use form buttons instead of links to keep robots out of page history etc. Default zwikidotorg content - fewer pages - new FrontPage - new HelpPage - updated RecentChanges, uses catalog if available - updated SearchPage with selected extra search tools - cleaned up UserOptions - provide a list of timezones, MoinMoin style (IssueNo00146) - a standard_error_message method handles nonexistent urls, with fuzzy and partial matching on page names - minor text cleanups in default editform & zwikidotorg content Rendering - Pre-rendering support: new page types (used by default) process text formatting rules and wiki linking rules once at edit time. This makes rendering of large pages much faster (possibly at the expense of quicker zodb growth). Also allows us to accurately catalog links. - Better freeform page names - square brackets now allow almost any page name, not just those with url- and zope-id-safe characters. Freeform names are converted to wikiname-like page ids and either may be used for linking, supporting co-existence of freeform and wikiname pages. - Fuzzy linking - square brackets also do fuzzy linking, ignoring whitespace and capitalization.
- square brackets now link only to zwiki pages, not other zope objects or url paths - Wiki links now have title attributes/tooltips displaying the target's age and last editor when you mouse over (except for freeform page names in contents hierarchy and parent context) - support for simple sub wikis/WikiAcquisition - wikilink targets can be acquired from the parent folder (or above) and are displayed with ../ prepended - don't link WikiNames inside structured text links (IssueNo0190) or the content of html anchor tags (IssueNo0194) - allow stx tables to have + at the corners - the RemoteWikiURL tag is now case-insensitive - make accidental structured text footnote processing less likely; use more standards-compliant "ref" prefix for footnotes, like newer stx - don't treat a bare url followed by : as a remote wiki link - make stx links work in non-html stx modes (IssueNo0193) - fix for IssueNo0186 (page rendering fails if it contains a freeform link containing unbalanced parentheses) - added (?L) to the regexps which use \w or \b to make them locale-sensitive. (EdwardKreis, AlexyKhrabrov, InternationalCharsInRegexps) - display "you are here" in page hierarchy Editing - strip HTML header & footer found in edit text - make post-stx HTML & BODY tag stripping more robust (EdwardKreis) - check for a webdav lock before saving an edit or entering editform - don't highlight the last edit access key if it's not hyperlinked - allow existing files to be re-uploaded (IssueNo0006) - don't add a link for uploaded files if the page already has one - set last_editor properly during mailin (IssueNo0207) - recatalog pages after ftp/http/webdav put - abandon the old antidecap kludge - fixed the multiple subscription bug (IssueNo0161) - fixed page creation - make mailin tracker issue creation more robust - mailin in the context of a zwiki page was not working as advertised - include standard mailing list headers in mailouts - discard all mail that appears to come from a bot, and the common "out of office" auto responder replies - post comments/create pages using the sender's real name or just the username component from their email address, not the full address (IssueNo0066, PieterB/SM) - fixed a bug in upgradeSubscribers which caused folder properties to proliferate Tracker - added a createIssue method which can replace the AddIssue & IssuePrototype pages - increased title's max length from 100 to 200 when editing issues - when rendering the issue form, generate option menus dynamically from the issue_* properties (MikeFair) Compatibility - better auto-upgrading & backwards compatibility; allow auto-upgrading to be disabled via flag in Defaults.py; new upgradeAll method for batch upgrading & re-rendering - removed STletters dependency for older zopes - don't pass stx header argument with zope 2.4.0 (IssueNo0152, TrevorToenjes) - added zope 2.5.1 to list of zope versions for unit testing - improved python1.5 support - improved zope version checking Misc - support WikiForNow-style regulations if use_regulations boolean folder property is true - add support for page renaming/deleting - tweak allowed suffixes for fs-based wiki templates - add more informative transaction notes - ZWikiPage.manage_changeProperties was non-functional - fix ZMI add zwiki page (IssueNo0187), make it equivalent to wiki page creation - Spanish translation of initial i18n strings (J. David Ibáñez) - Canadian French translation of initial i18n strings (JoannePlouffe) - the beginnings of i18n support, using Localizer if installed.
A few strings (edit conflict, authentication errors) internationalized. - use last_edit_time everywhere in preference to bobobase_modification_time - debug logging of zwiki catalog operations (if Z_DEBUG_MODE or ZWIKI_DEBUG variables are true and STUPID_LOG_SEVERITY is <= -200) API changes (summary) - wikimail/editform/subscribeform/backlinks page templates will be used in preference to standard_wiki_header/standard_wiki_footer/ editform/subscribeform/backlinks DTML methods. The page body is passed to wikipage as options/body. These options are passed to editform: page, text, action, id, oldid. - page types renamed. Here is the current list of page types (see AllAboutPageTypes for details):stxprelinkdtmlhtml stxdtmllinkhtml dtmlstxlinkhtml stxprelinkhtml stxlinkhtml stxprelink stxlink wwmlprelink wwmllink prelinkdtmlhtml dtmllinkhtml prelinkhtml linkhtml dtmlhtml html plaintext - the RemoteWikiURL tag is case-insensitive - the OFS.ObjectManager.bad_id definition is used as the basis for generating page ids - some properties are now looked up by containment rather than acquisition context, eg subscriber lists ADDED:: ‘Zwiki: Rename pages’ permission use_regulations folder property zwiki_displaymode cookie ZWIKI_DEBUG environment variable addStandardLayoutTo age applyLineEscapesIn asAgeString backlinks canonicalId canonicalLinks canonicalLinks changeIssueProperties createIssue creationTime editform folder htmlunquote isMailoutEnabled isZwikiPage lastEditInterval lastEditTime linkTitle linkTitleFrom offspringAsList offspringIdsAsList pageWithFuzzyName pageWithName pageWithNameOrId relative_urls renderLinksIn standard_wiki_footer standard_wiki_header stxToHtml subscribeform upgradeAll urlunquote wikipage CHANGED:: ‘Zwiki: Recycle pages’ permission -> ‘Zwiki: Delete pages’ creation_time and last_edit_time are now ISO-format strings doLegacyFixups -> upgrade REMOVED:: zwiki_advancededit cookie Focus: unit tests, wikimail enhancements, general fixes and a permissions rename. Upgrade notes You’ll need to make a note of your zwiki permissions settings before upgrading and recreate them after this upgrade. You’ll find all the new permissions at the bottom of the security page, aside from the Add permissions which are unchanged. Also you may have dtml which checks for the old permissions and needs to be updated. For example, the wiki templates refer to the permissions by name in standard_wiki_header, standard_wiki_footer and editform. Features - zwiki permissions renamed. Details:'Add ZWiki Pages' (no change) 'Add ZWiki Webs' (no change) 'Zwiki: Add comments to pages' (was Append to ZWiki Pages) 'Zwiki: Change page types' (was Change ZWiki Page Types) 'Zwiki: Edit pages' (was Change ZWiki Pages) 'Zwiki: Reparent pages' (was Reparent ZWiki Pages) 'Zwiki: Recycle pages' (was Send ZWiki Pages to Recycle Bin) Zwiki also uses zope’s ‘Add Documents, Images, and Files’ (no change) - zwiki page ‘subscribers’ property replaced by ‘subscriber_list’ property; the folder also now exposes this in the ZMI. Upgrade notes: old pages will be upgraded as needed, this will affect the last-modified times. - new zwiki page properties: ‘creator’, ‘creator_ip’, and ‘creation_time’ - a ‘redirectURL’ REQUEST attribute can be used to control the destination after edit, append or comment. 
- the enclosing [] are displayed prior to page creation - relative paths within [] now display in their entirety (instead of just the last component) - UI and usability tweaks for the default subscription form; other page subscriptions are listed - the folder title is now used in html page titles, in mailout subjects and before the word “contents” in the page header. The old titleprefix method is no longer used. - wikimail: only comments (made via the “comment” method) are now mailed out by default; to mail out all edits as before, set a ‘mailout_policy’ folder property to “edits”. - wikimail: hide mailout recipients (if Lennart Regebro’s MailHostFix product is installed) - wikimail: mailout now requires either a ‘mail_from’ or ‘mail_replyto’ folder property. If ‘mail_from’ is present, always use that for the From: field. Otherwise, show the poster’s email address or user name. (closes IssueNo0122) - wikimail: increase wrap margin from 70 to 78 in mailouts - wikimail: add X-Zwiki-Version, X-BeenThere & Precedence headers - wikimail: basic loop protection - silently discard any incoming messages containing X-Zwiki-Version - wikimail: mailin can now create pages (JosYule) - wikimail: mailin can create tracker issues (see mailin.py) - wikimail: mailin.py now accepts mail only from subscribers (somewhere in the wiki) by default. Call with ‘subscribersonly=0’ or set folder property ‘posting_policy’ to “open” to disable. - wikimail: the default destination page for mailin can be configured with a ‘default_page’ folder property - wikimail: use only the first plaintext part from multipart MIME messages - wikimail: use virtual host monster to help direct messages if present (may have some imeme-isms ?) - wikimail: some other tweaks to mailin delivery rules intended to simplify mailin alias setup (see mailin.py) - zwikidotorg template: header/footer/editform UI updates; display subscriber count in the header; set comment headings on by default for pages named “IssueNo*” - zwikidotorg template: for site logo, use the folder’s ‘site_logo’ property/object or the default zwiki icon. - add zwiki web form: made this a little more robust; added support for template configuration wizards (if a form or script named TEMPLATE_config is found, redirect there to create the wiki) Bug fixes - DeleteMe should redirect to the first existing parent afterwards, now working again (IssueNo0008) - zwiki now coexists with structured text footnote references (yay!). [] will link to a matching footnote if there is one, otherwise it is treated as a wiki link (IssueNo0110) - don’t treat [] as wiki links if they contain characters which zope does not allow in object ids (IssueNo0090) - don’t html-quote international characters any more. If losing international characters due to edits by dumb browsers is now a problem for you, please follow up on IssueNo0004. (Taewook Kang & others) - make wikiname regular expressions a bit more international by using string.upper/lowercase. You may need to modify bad_id in zope’s OFS/ObjectManager.py also. 
(LaloMartins, Alexy Khrabrov) - fix for IssueNo0112, structured text pages have extra html & body tags (natesain) - parenting tweaked to work better with acquisition/subfolders (IssueNo0108) (robert@redcor.ch) - wikimail: stray html tags were being left in mailouts containing long quoted lines (IssueNo0087) - wikimail: don’t add extra blank lines in mailouts - wikimail: don’t send duplicates when subscribed to both page and wiki (IssueNo0055) - wikimail: calling mailin in the context of a page should always use that page for posting, now it does - wikimail: format quoted replies in mailed-in tracker issues, as is done with mailed-in comments (IssueNo0070) - zwikidotorg template & general: edit access control ui improvements. Help/subscribe links now always visible; edit/append links conditionally visible; viewing editform requires edit permission; misc. color & layout tweaks. - zwikidotorg template: fix scrolling to bottom of page after comment - zwikidotorg template: don’t include “set” links on backlinks, because they are vulnerable to robots - zwikidotorg template: show search and quote in header by default again - zwikidotorg template: removed the “preferred front page” option (IssueNo0120) - zwikidotorg template: adding an empty comment was giving an error (IssueNo0123) - tracker support: issuedtml page type: layout & colour scheme tweaks - tracker support: allow sorting by category/severity/status to give the expected order (see IssueNo0115) - tracker support: don’t list other page types when editing an issue page - tracker support: don’t inherit issuedtml page type when creating a new page from an issue page - Possibly fixed a bug with Add Zwiki Page permission. - unit tests overhauled, cleaned out and updated. Zwiki now follows the latest zope testing practices. - zwikidotorg template: remove references to old permissions - recognize , in urls (IssueNo0130) - zwikidotorg template: remove comment form border - fix bracketed path linking (IssueNo0139) - missed some old-style imports in the subscription code - - zwikidotorg template: removed SiteLogo capability & obsolete zwiki_homepage reference, added secret feature for minimalists. -) - permissions renamed (see above) - viewing editform() now requires edit permission - ‘subscriber_list’ page property replaces ‘subscribers’ - new page properties: ‘creator’, ‘creator_ip’, ‘creation_time’ - new optional folder (or page) property: ‘mailout_policy’ - new optional folder properties: ‘mail_replyto’, ‘posting_policy’, ‘default_page’ - ‘titleprefix’ property/method no longer used, folder title used instead - new optional REQUEST attribute: ‘redirectURL’ - sendMailTo() now takes a list of recipients, not a string - subscriberList(), wikiSubscriberList(), allSubscribers() now return a list - new page methods: allSubscriptionsFor(email), otherPageSubscriptionsFor(email) - zwiki_mailin.py external method renamed to mailin.py - new mailin() arguments: ‘subscribersonly’, ‘trackerissue’ - wikis subdirectory renamed to templates, and filesystem templates now use suffix to specify zope meta_type/zwiki page_type - zwikidotorg template: new optional folder property ‘site_logo’ - tracker support: new page methods: category_index(), severity_index(), status_index() - new page method zwiki_version() - auto-cataloging no longer requires DTMLDocumentExt and adding a zwiki page no longer gives “AttributeError: index_object” if it’s not present. (IssueNo0054). Zwiki uses the catalog specified by the SITE_CATALOG property, or “Catalog”, or none. 
- the add zwiki web form complained “list.remove(x): x not in list” due to missing ZWiki/wikis/DEFAULTS, fixed (IssueNo0043) - includes the experimental ‘issuedtml’ render method used by ZwikiTracker - zwikidotorg template: UserOptions was not displaying the username field properly - a somewhat important change that also went in last release: responsibility for generating the page header and footer has been moved into the render_* methods -) - zwikidotorg template: remove “BookMarks” page reference from footer - ack it’s going to be one of those releases :) zwikidotorg template: UserOptions page_type should be structuredtextdtml - another zwikiwebs change. The ‘Add ZWiki Web’ form) - lasttext and the diff methods accept one or more revision arguments (counted backwards from the latest revision). - the old html-format diff has been demoted to oldDiff. The new one colourizes textDiff’s output and adds whizzy navigation links for stepping through the edits. (Click on the page timestamp). - - zwikidotorg template updated to latest zwiki.org layout, example append_with_heading method added - mail subscribers now receive edits as well as appends - simple create method added to api - new page- & wiki-wide mail subscription mechanism (.../subscribeform) - wiki_page_url/wiki_base_url renamed to page_url/wiki_url; checkEditTimeStamp/editTimestamp renamed to checkEditConflict/timeStamp; old api kept for backwards compatibility - code cleanups, refactoring - the link to an uploaded file or image used to be !-escaped; this is no longer necessary - file upload now requires “Add Documents, Images and Files” permission (was “Add Documents, Files and Images”) - fixed zwiki_username_or_ip() so last editor username is saved again - moved zwiki web creation into core python product, so the manual ZWikiWebs import is no longer needed. The sample wikis are now shipped as individual zexp’s in ZWiki/import, and are automatically imported to /Control_Panel/Products/ZWiki at product startup. (Install your own sample wikis there as well). /Control_Panel/Products/ZWikiWebs can be deleted. - “add zwiki web” form updates - escaping remote wiki links with ! should now work - tests have been completely reorganized and updated - added custom __repr__ from CMFWiki - refactored code into multiple modules, following CMFWiki. Encapsulated some functionality in mix-in classes. - another stx fixup: a single letter followed by a period is no longer mistaken for a numeric bullet - fix for one of 0.9.3’s stx fixups: spurious html comments no longer appear in stx examples - exposed the “zwiki_username_or_ip” utility method, which given REQUEST returns a best guess for username (authenticated user, zwiki_username cookie, or ip address) - experimental “lasttext” and “diff” methods show the text of a page’s last revision and a concise diff with the latest - bare/noheader/nofooter flags can now also be passed as keyword args - from WikiForNow: wiki-linking is now inhibited within - <pre></pre> - <code></code> - structured text :: examples - structured text ‘’ quoted code - html tags. - append permission now works - tweaked the anti-javascript hack - tweaked the anti-decapitation kludge - added plainhtmldtml mode (DTML + HTML, nothing else) - creating/editing/deleting pages with eg spaces in the name has been broken for a while, it seems - made some fixes in this area - made the edit conflict message more helpful - relaxed edit conflict checking: if your username & ip address match the last editor’s, the timestamp will be ignored. 
In other words, you can no longer have an edit conflict with yourself. This means eg you can backtrack in your browser, edit and click Change again. This change may disable conflict checking amongst anonymous users coming through a proxy. - renamed the “username” property to “last_editor”, added “last_editor_ip”, made these read-only in the mgmt. interface. Existing zwiki pages are upgraded when viewed. dtml-var username is still supported for backwards compatibility, but deprecated; use dtml-var last_editor_or_ip by preference. - stx workaround: trailing blank lines no longer cause unwanted headings - stx workaround: initial word plus period no longer becomes a numeric bullet - stx workaround: whitespace after :: no longer prevents example formatting - stx headings on first lines were broken - fixed - fix for a 0.9.1 bug: with hierarchy display enabled, creating a new page from a top-level page gave “typeerror” - gopher: urls are now recognized - renamed {wiki,page}_path to {wiki,page}_url in 0.9.2 - added a bunch of wiki_{page,base}_url variants for testing purposes - about: urls are now recognized - fixed a potential DeleteMe error message caused by incorrect parents - allow standard_wiki_page to be defined as a folder property - added plainhtml render mode - HTML only, no wiki-linking - test suite updates, documentation - allow non-wiki paths in [] - image file names in [] are auto-inlined - use uploaded image size to help “add file/image” decide whether to inline - reset parents when they have become outdated/confused - folder attribute “standard_page_type” overrides type of all new pages - display new page name when creating a page - record username when creating a page - renamed wiki_page_url(), wiki_base_url() to page_path(), wiki_path(). The old names, used throughout existing dtml code, are supported but deprecated - made remotewikilinks more careful to avoid trailing punctuation - allow []-named pages in remote wiki links - disable structured text’s [] footnote linking to avoid conflicts - added append method - simple email notification (PageSubscribers) - made wikilinks absolute for greater robustness - header/footer layout tweaks - made last editor’s authenticated username override username cookie - proxy role tweak - added file/image upload - added more detailed permissions - refactored edit() - changed/reverted wiki_{page,base}_url as per Christian Scholz for virtual hosting - allowed https: urls - allowed + and $ in remote wiki links - added (experimental) page deletion - record IP address when username cookie is blank - make disabled javascript tags visible - added more unit tests - log last editor’s IP address if there is no username cookie - wrapped a bunch of long lines - unit tests - added support for ZUnit and DocTest, and a few initial tests. contains some useful recipes for automated testing. 
ZWikiWebs.zexp: incorporated latest zwiki.org tweaks, namely: - removed a spurious menu from the add zwiki web form - added UserOptions to the default BookMarks and removed it from the page footer - bookmarks, quote, search box and hierarchy may all be turned on or off in UserOptions - the default home page is now a user option - user options & search box tab ordering fixed - always show the editform even for write-protected pages, with appropriate header/footer color - page history is now accessible - simplified RecentChanges - fixed broken line-ending handling and non-rendering of initial lines containing ”:” in dtml modes - return to the wiki page after clicking the reparent button - added warning of incompatibility with old dtml methods to readme - cookie-based user options, including edit form size, timezone, bookmarks and wikiwikiweb-style username (help from Phil Armstrong) - ZWiki is now zope 2.2-compatible (Garth Kidd) and -requiring, and benefits from the 2.2 security model. Executable dtml pages now run with those permissions that are common to both the page-viewing user and the wiki web’s owner. Set the folder’s owner to limit the permissions of executable pages. - incorporated & updated Chris Withers’ product for creating wiki webs - added streamlined “hierarchal2” wiki style & other layout tweaks - wikiwikiweb-style late page creation - added simple javascript-disabling code - made paths work with virtual hosting again (Evan Simpson) - fixed unreliable ! line protection in structuredtext modes - fixed unreliable remote wiki links in classicwiki mode - more permissive remote wiki link regexps (Geoff Gardiner) - “with this” dtml kludge no longer needed - added built-in defaults for all dtml methods - simpler, more consistent urls & api - code refactoring/cleanups, other misc. bugfixes - wikinames must now start on a word boundary - added # and = to url regexp - try allowing numbers in wikinames - added utility methods wiki_base_url & wiki_page_url - added KenManheimer’s hierarchy & navigation code - added JimFulton’s edit conflict safety belts for http & ftp - added jim’s permission & validation patch - add & change zwiki page permissions are now functional - reorganized & expanded example content - deemphasised DTML-enabled content where not needed - changed pages to structuredtext where possible, restricted permissions on the rest, changed the default page type to structuredtext - simplified the default wiki content & page layout - disabled catalog support for the moment - new sample wiki, defaults to structuredtextdtml mode only - bare urls are automatically hyperlinked, others should be left alone - extensible markup modes - you can add your own render methods - code cleanups - tweaked markup modes for usability (see new TextFormattingRules) - made validation of newly-edited DTML more accurate - ZWikiPages are catalog-aware (mostly.. still some issues ?) - added RemoteWikiLinks - bracketed numbers are no longer wikilinks, so StructuredText’s footnotes can work - sample wiki: included latest DTML features from ZWikiWeb - SearchPage, JumpTo, AnnoyingQuote, separate edit page, etc etc. - multiple markup formats - structured text, classic wiki (TresSeaver), HTML/DTML, plain text - better international character handling (AlexandreRatti) - LFCR line-terminations are converted to LF - tweaked editform layout - source cleanup - renamed default_wiki_page to standard_wiki_page for consistency - ! 
at the beginning of a line protects it from wiki translation - now checks for valid DTML & reports errors - no longer requires “view management screens” permission - tweaked wikilink regexp - new icon - misc fixes - standard_wiki_header, standard_wiki_footer & default_wiki_page have built-in defaults; define as dtml methods to override - sample wiki: AllPages is a zwikipage; pagenames with spaces are listed properly - sample wiki: added a RecentChanges page - sample wiki: simplified
http://zwiki.org/docs/CHANGES.html
CC-MAIN-2017-13
en
refinedweb
The best way is to set static_url_path to the root URL:

from flask import Flask

# Serving the static folder at the application root means files such as
# static/robots.txt or static/sitemap.xml are reachable directly at
# /robots.txt and /sitemap.xml, with no extra routes needed.
app = Flask(__name__, static_folder='static', static_url_path='')
https://codedump.io/share/friIThSOINI1/1/static-files-in-flask---robottxt-sitemapxml-modwsgi
CC-MAIN-2017-13
en
refinedweb
We are getting closer! Combining your input, speaker availability and their skills, the list below is the result... Thanks, Chuck

Developer Tools Track
Take your software development skills to the next level with deep technical training that covers the best of Visual Studio .NET 2003 and .NET Framework 1.1, while preparing you for the future with Visual Studio 2005 and .NET Framework 2.0. Sessions cover language enhancements, web development with ASP.NET, IDE productivity features, Visual Studio 2005 Team System, Visual Studio 2005 Team Foundation Server, and more. Get in-depth information on building mission-critical software using Visual C++, Visual Basic, Visual C#, and Visual J#. Acquire skills that will make an immediate impact, while learning what you need to be ready for Visual Studio 2005.

Title: ASP.NET 2.0: A Look Inside Security, Membership, Role Management and Profiles in ASP.NET 2.0
Abstract: Drill down on the new Membership, Role Management and Profile features in ASP.NET 2.0. See how ASP.NET 2.0 will enable developers to eliminate hundreds of lines of complex code today -- and build even more secure applications quickly. Learn how to dynamically store profile data about users and construct more dynamic and personalized sites that dramatically improve the customer experience.
Level: 300
Australian Speaker: Paul Glavich
New Zealand Speaker: Gabriel Smith [GabrielS@intergen.co.nz]

Title: ASP.NET 2.0: Web Parts with ASP.NET 2.0 & Advanced topics
Abstract: Drill down on the new Web Parts infrastructure in ASP.NET 2.0. See how you can use Web Parts to build rich Web sites that enable end users to dynamically control the layout and component contents of pages. Learn how this will interoperate with SharePoint Products and Technologies.
Australian Speaker: Darren Neimke
New Zealand Speaker: ?

Title: ASP.NET 2.0: Best Practices for Building Web Application User Interfaces with Master Pages, Site Navigation and Themes
Abstract: Properly integrating the powerful features of ASP.NET 2.0 into a high quality, professional site design can sometimes be tricky. This session covers specific best practices, tips and tricks, and other lessons learned from the beta cycle to help you more easily customize Master Pages, Themes, Site Navigation, and more, to build the most attractive and functional Web sites possible. Learn tricks about control properties, CSS, master page customization and nesting. See how themes can make it even easier for you to control and change the look and feel of your Web application. Don't miss learning the core skills and best practices for creating the most effective and attractive ASP.NET 2.0 experience possible.
Australian Speaker: Dave Glover

Title: ASP.NET 2.0: Building Data-Driven Web Sites in ASP.NET 2.0
Abstract: This session discusses the fundamentals of data access and how to render data in a [...]. This session also covers aspects of Visual Studio 2005 Express and SQL Server 2005 Express for building data-driven Web sites.
Australian Speaker: Adam Cogan
New Zealand Speaker: Adam Cogan

Title: Microsoft Visual Basic 2005: Under the Covers: An In-Depth Look at Visual Basic .NET in the .NET Framework 2.0
Abstract: Microsoft Visual Basic 2005 includes a new, highly customizable application framework that makes it easier than ever to develop powerful Windows smart client applications. Learn how to leverage the My namespace to enable application logging, custom user authentication, roaming settings, and programmatically access deployment features.
Learn how to customize templates and snippets for reuse within your own solutions. This session also covers some of the advanced language features coming in Visual Basic 2005, including the ins and outs of generics, operator overloading, XML Doc comments, partial types, new asynchronous calling capabilities, custom events, and other advanced features of the VB 2005 language.
Australian Speaker: Jay Roxe
New Zealand Speaker: Jay Roxe

Title: Visual C# Under the Covers: An In-Depth Look at C# 2.0
Abstract: Anders discusses possible future directions for the C# language.
Australian Speaker: Mitch Denny
New Zealand Speaker: Derek Watson

Title: Improved IIS Debugging: Understanding and Using the Newest Tools and Theories for Debugging Web Applications
Abstract: The key to successfully attacking problems in IIS applications is to understand the architecture of IIS. Although this session focuses on IIS 6.0, it will outline what administrators and developers should know to successfully debug Web applications. Learn the techniques used by seasoned debuggers at Microsoft, while we also introduce a new but powerful tool called IIS Debug Diagnostics. IIS Debug Diagnostics is a slick tool used to configure the right debug method based on the symptoms. It gathers data, then analyzes the data, providing administrators and developers the potential causes and remedies for the problem. This session also demonstrates how you can successfully build objects to extend IIS Debug Diagnostics analysis capability. Upon completion, you will be more in tune with the techniques used at Microsoft to debug applications. You can then use that knowledge to successfully deploy and use the new debug toolkit released from Microsoft, IIS Debug Diagnostics.
Australian Speaker: Ken Shaefer
New Zealand Speaker: Jeremy Boyd

Title: .NET Framework: CLR Internals
Abstract: Learn how the CLR works "from soup to nuts". Learn about the CLR's execution model including intermediate language, verification, JIT compilation, metadata, and assembly loading. Explore the runtime relationship between code, types, and objects; a thread's stack and the heap. See how the CLR's garbage collector knows which objects are in use and which are not, so that memory from unused objects can be reclaimed. After this session, you'll have a great understanding of how the CLR does the things it does.
Level: 400
Australian Speaker: Mike Koenig?
New Zealand Speaker: Mike Koenig?

Title: .NET Framework: What's New in the Framework for V2.0
Abstract: Learn how the .NET Framework 2.0 serves as the starting point for many of the great features you'll need to make your development experience faster, easier, and more productive. Learn about key .NET Framework 2.0 features exposed by the CLR, including the ability to host the CLR, new capabilities for network communications, ClickOnce, Edit-And-Continue (EnC), and 64Bit support. Whet your appetite for some of the many and varied new Base Class features, including strongly-typed resources support, and SerialPorts.

Title: .NET Framework: The Pitfalls of Exception Handling: Why They Hurt, and How to Avoid Them
Abstract: Proper exception handling is one of the more difficult aspects of programming for .NET. Developers of all levels struggle to understand proper techniques. This session describes some pitfalls and best practices for handling exceptions in .NET, with occasional "CLR Internals"-style drilldowns into the technical details behind the pitfalls.
Level:

Title: Visual Basic Tips, Tricks and Other Advanced Topics
Abstract:
Australian Speaker: Nick Randolph & Bill McCarthy?
New Zealand Speaker:

Title: Developing Xbox 360 Games with XNA
Abstract: This is a must-see session for Games Developers who are interested in Games development. Xbox 360 is the next generation gaming platform and XNA is the graphics technology it will use for these games. This session will walk through the tools and technologies required to build games targeting this new exciting platform.
Australian Speaker: Tony Goodhew
New Zealand Speaker: Tony Goodhew

Title: Advanced Debugging PSS Lunch time Session
Australian Speaker: Sean RyanZ? - Need to talk to him

Title: What's New in Web Services Enhancements (WSE) 3.0
Abstract:
Australian Speaker: Nigel Watson
New Zealand Speaker: Nigel Watson
http://blogs.msdn.com/b/charles_sterling/archive/2005/05/07/charles.aspx
CC-MAIN-2014-52
en
refinedweb
public class Naerling : Lazy<Person> {
    public void DoWork() {
        throw new NotImplementedException();
    }
}

Naerling wrote: Buying my first house!
Naerling wrote: who I now call my girlfriend
Naerling wrote: I've been busy getting a life.
OriginalGriff wrote: What does she call you?
OriginalGriff wrote: Congratulations!
Naerling wrote: a lot of 157m2
Naerling wrote: the house has about 340m3 inside
Johnny J. wrote: Where the hell did you find so much space in the Netherlands?
Naerling wrote: It has a lot of 157m2 and the house has about 340m3 inside
Naerling wrote: I also met this wonderful girl recently who I now call my girlfriend
Naerling wrote: Next to that I'm also still trying to do some exams for my study IT on the Open University.
Naerling wrote: So to all those people who missed me (I know you did!)
_Maxxx_ wrote: Prinf("Stalker");
OriginalGriff wrote: Dyslexic.NET
_Maxxx_ wrote: Congrats!
_Maxxx_ wrote: It doesn't have a blue light on top, does it?
_Maxxx_ wrote: If (you call her that to her face) then Congrats! Else Prinf("Stalker");
_Maxxx_ wrote: Hmm, girlfriend, study, study, girlfriend ... decisions decisions!
Nagy Vilmos wrote: sober up!
http://www.codeproject.com/Lounge.aspx?msg=4591390
CC-MAIN-2014-52
en
refinedweb
23 August 2013 23:00 [Source: ICIS news] HOUSTON (ICIS)--Here is Friday's end of day Americas oil and chemical market summary from ICIS.

CRUDE: Oct WTI: $106.42/bbl, up $1.39; Oct Brent: $111.04/bbl, up $1.14. NYMEX WTI crude futures extended the previous session's gains on pre-weekend buying, supported by a rally in the gasoline futures complex in response to various refinery issues.

RBOB: Sep $3.0072/gal, up 4.24 cents/gal. Reformulated blendstock for oxygen blending (RBOB) gasoline futures settled higher on higher crude futures and reports of refinery outages. This was the first time the September contract traded above the $3/gal mark since it took over the prompt spot on 1 August.

NATURAL GAS: Sep: $3.485/MMBtu, down 6.0 cents. The front month contract on the NYMEX natural gas futures slipped back below the $3.50/MMBtu mark as the market corrected itself following Thursday's near 3% price surge. Bearish sentiment strengthened through the day on concerns over high production and low long-term demand, despite service company Baker Hughes reporting a slight fall in the number of drilling rigs being used across the US in its latest weekly rig count.

ETHANE: lower at 24.75 cents/gal. Ethane spot prices were slightly lower on a drop in natural gas futures trading.

AROMATICS: styrene flat at 76.00-77.00 cents/lb. Prompt styrene spot prices were discussed at 76-77 cents/lb FOB (free on board) on Friday, sources said. The range was flat from a week earlier as supply remains tight and trade participants kept to the sidelines.

OLEFINS: ethylene traded lower at 54.5 cents/lb, PGP traded higher at 67.25 cents/lb. US August ethylene traded three times at 54.50 cents/lb, lower than the previous day's trade at 54.75 cents/lb, as supply concerns eased. US August polymer-grade propylene (PGP) traded at 67.25 cents/lb on Friday afternoon, higher than the previous day's trade of 67.00 cents/lb but lower than an early-Friday trade at 67
http://www.icis.com/Articles/2013/08/23/9700299/EVENING-SNAPSHOT---Americas-Markets-Summary.html
CC-MAIN-2014-52
en
refinedweb
12 June 2009 17:32 [Source: ICIS news] TORONTO (ICIS news)--Shell has begun the due diligence process for two refineries in northern Germany. Cornelia Wolber, spokeswoman for Shell Deutschland, said the due diligence process was expected to take several weeks, if not months. There were investors and firms interested in the refineries, Wolber said, but she declined to say how many and would not disclose any names. In March, Shell said it planned to sell the two refineries. Shell's refinery in Heide has a processing capacity of 4.5m tonnes/year, producing fuels and petrochemicals products. Shell also produces ethylene, propylene, benzene, toluene and xylenes at Heide, according to its website. The refinery in Hamburg-Harburg has a processing capacity of 5.5m tonnes/year. It produces fuels, base oils and waxes.
http://www.icis.com/Articles/2009/06/12/9224746/shell-in-due-diligence-on-germany-refinery-disposals.html
CC-MAIN-2014-52
en
refinedweb
28 April 2010 23:59 [Source: ICIS news] LONDON (ICIS news)--The European April caprolactam contract has increased by €160/tonne ($211/tonne) because of tight supply, higher benzene costs and strong demand for export to Asia, market sources said on Wednesday. "I have concluded my April contracts at an increase of €160/tonne," a major caprolactam consumer said. "Suppliers tell us that..." The April contract settled at €2,190-2,246/tonne FD (free delivered) NWE (northwest Europe). Another major buyer... On the selling side, producers were still citing strong demand in Asia and expected continued tightness. "Customers are asking for more product and asking about paying more to get additional material. On top of this we are getting requests from customers we don't normally supply," said a caprolactam producer. The producer said that on average, it increased April contracts by €160-170/tonne. ($1 = €0.76)
http://www.icis.com/Articles/2010/04/28/9354405/europe-april-caprolactam-rises-160t-on-tight-supply-benzene.html
CC-MAIN-2014-52
en
refinedweb
02 February 2011 19:00 [Source: ICIS news] WASHINGTON (ICIS)--Republican members of the US House and Senate will introduce a bill on Wednesday that would bar the Environmental Protection Agency (EPA) from regulating emissions of greenhouse gases (GHG), their offices said. Congressman Fred Upton (Republican-Michigan), chairman of the House Energy and Commerce Committee, was to jointly issue the proposed legislation on Wednesday with Senator James Inhofe of Oklahoma. Inhofe's spokesman said that the bill would "prohibit the EPA from regulating greenhouse gas emissions under the Clean Air Act" (CAA). "The bill is a narrowly drawn, targeted solution that prevents the Clean Air Act from being transformed into a regulatory vehicle to impose a cap-and-trade energy tax," the spokesman said. "The Obama administration will not be allowed to regulate what it has been unable to legislate," the spokesman added. He was referring to the 2009-2010 effort to pass a comprehensive climate change bill in Congress that would have imposed increasing limits on US emissions of carbon dioxide (CO2) and other greenhouse gases. Although the US House narrowly approved a climate change bill in mid-2009, the measure never got any serious traction in the Senate. Instead, the EPA issued regulations under the Clean Air Act that, beginning this year, would restrict GHG emissions by US factories, refineries, power plants and other industrial facilities. Those regulations have been strongly opposed by industry. The EPA's plans to restrict emissions of greenhouse gases also are the target of multiple lawsuits by state governments and industrial groups. The Upton-Inhofe bill would be joining a half-dozen other measures introduced in recent days by Republicans and at least one Democrat member that would, in one fashion or another, delay or bar EPA's regulation of greenhouse gases.
http://www.icis.com/Articles/2011/02/02/9431894/us-congress-moves-to-block-epa-rules-on-greenhouse-gases.html
CC-MAIN-2014-52
en
refinedweb
Qt aims at being fully internationalized by use of its own i18n framework [qt-project.org]. Check whether the KDE community [kde.org] already has a translation of Qt to your language and whether they would be willing (and legally able) to contribute it to Qt upstream. Even if not, somebody from the community may be able to help [i18n.kde.org].

First, you need translation templates to work with. Qt uses its own TS (translation source) XML file format for that. There are several ways to get them:
- make sure your $PATH starts with $qt5/qtbase/bin;
- run perl split-qt-ts.pl <lang> in $qt5/qttranslations/translations; or
- run make ts-<lang> in the $qt5/qttranslations/translations subdirectory, where <lang> is the language (and optionally country) code; make ts-<part>-<lang> updates only a specific file; or
- run make ts-<part>-untranslated (or make ts-untranslated to get all) and rename the file(s) accordingly.

Do not qualify the language with a country unless it is reasonable to expect country-specific variants. You will also need to add the files to translations/translations.pro. [...] to dispose of the old strings.

Next, you need to commit any PRI/PRO files you modified and the TS file(s) you translated. If you added new files, first run git add -N <files> (the -N is important!). Then run make commit-ts to check in the files (you should have no other modified files due to the use of language-specific ts targets). The commit-ts target will also strip out line number information from the TS files to keep the changes smaller. Finally, you need to post a change on Gerrit for review.

For Qt Creator the instructions are identical to the ones for Qt, except that the translations and the various ts- targets live in share/qtcreator/translations. Qt Creator will not use the translation unless it finds one for the Qt library as well. Qt Designer, Qt Assistant and the Qt Help library should be translated as well, though failure to do so will go unnoticed at first. The infrastructure for that is somewhat lacking. Still, there is for example the simplified Chinese doc translation.

OSX, unless we begin to support touchscreens. But OSX does not have multi-touch touchscreen support anyway, AFAIK. Touchscreens are commonplace, and we should try to have feature parity with the other touch platforms. There is also a complete listing of all pages.

Qt 5.4.0 was released on 10th of Dec 2014. The following pages have more information: New Features in Qt 5.4, Qt 5.4 Tools & Versions [qt-project.org]. Issues to be fixed before Beta: Issues to be fixed before RC: Issues to be fixed before final:
Release team meeting 20.10.2014 [lists.qt-project.org]
Release team meeting 10.11.2014 [lists.qt-project.org]
Release team meeting 17.11.2014 [lists.qt-project.org]
Release team meeting 24.11.2014 [lists.qt-project.org]
Release team meeting 08.12.2014 [lists.qt-project.org]

For almost every finished Qt project it is wanted and in most cases also required to carry external resources. When porting the executable to other users on other computers that usually don't have Qt installed, it is necessary to port the needed Qt libraries, too. Also, it is often necessary to include sound files, pictures, text files and other material in the executable. One common way is to pack all that stuff into a zip archive and hope that the target user will manage things correctly. Another, more complicated and maybe not in every case usable way would be to use the Qt resource management. For those of you who would like to handle import/export of every kind of resource very easily, I wrote a little "ResourceManager".
The obligatory algorithms are stored in two files: 1. resourcemanager.h, 2. resourcemanager.cpp. To make it easy to create a setup file by mouse clicking, I also wrote a user interface. This interface additionally requires the following six files: 1. mainwindow.h, 2. mainwindow.cpp, 3. myQDialogGetFile.h, 4. myQDialogGetFile.cpp, 5. myQDialogGetNewFile.h, 6. myQDialogGetNewFile.cpp.

For basic usage of the ResourceManager, you have to create a new project (this will be the executable that imports the resources into your setup) and add the two obligatory files (resourcemanager.h, resourcemanager.cpp). In the main source file, you only have to specify the resources and pass them to the ResourceManager algorithm. There are two functions that can be used, depending on how the resources shall be passed: a) as a QStringList, or b) from an existing resource table file (ini format) such as ResourceTable.ini.

I strongly recommend not calling the ResourceManager algorithms manually, because lots of mistakes can happen, like typing a wrong resource path, using forbidden characters (e.g. backslash) and so on. It is advisable to use the user interface instead. For using the interface, you only have to create a new project, import all files given above (resourcemanager.h/.cpp, mainwindow.h/.cpp, myQDialogGetFile.h/.cpp, myQDialogGetNewFile.h/.cpp), then compile and run it. After the main window is shown, you can select new resources from your computer and add them to the resource table, or you can load an already existing resource table (ini file). By clicking the button "import to library" you can import all files from the table into your specified setup file. You can also save the chosen resources as a new resource table file for later use. The only thing you have to modify in your setup file project is to add the two ResourceManager files (resourcemanager.h/.cpp) and place the corresponding command into the main function (or anywhere else at the point of execution where resources should be exported). I would really appreciate some feedback, so I can improve my own skills and make the algorithms better. Thank you in anticipation!

Usually applications have a Quit action in the context menu. The following code will show a question dialog to the user to make sure that (s)he really wants to quit. There is also an option to suppress this message. To re-enable this message you will need to clear the mainwindow/quitWithoutPrompt key through the default QSettings; the corresponding code can be used to do that.

The latest version of PySide is 1.2.2, released on April 25, 2014, and provides access to the complete Qt 4.8 framework. This Wiki is a community area where you can easily contribute, and which may contain rapidly changing information. Please put any wiki pages related to PySide into the PySide category by adding the corresponding text to the top of the page. Also, since PySide is sharing the wiki with other Qt users, include the word "PySide" or "Python" in the names of any new pages you create to differentiate them from other Qt pages.

The API Extractor was a library used by the binding generator component (Shiboken) to parse headers of a given library and merge this data with information provided by typesystem (XML) files, resulting in a representation of how the API should be exported to the chosen target language. The generation of source code for the bindings was performed by specific generators that were using the API Extractor library.
In PySide 1.1.1 [pypi.python.org] the API Extractor was merged into Shiboken, but docs on it are still available. The API Extractor is based on QtScriptGenerator [labs.trolltech.com] code.

Linking of position-based applications fails due to an incorrect CLASS_NAME. For more details and a temporary workaround see QTBUG-39843. The fix can be tracked via the change linked from that report. Note: If you built qt5 from the git repository and you get an error like that, run the indicated commands in your qt5 repository.

Currently, Qt5 is neither included in the BlackBerry 10 device software nor in the BlackBerry 10 SDK. However, Qt5 on BlackBerry 10 has reached an excellent level of quality and can be used for developing and publishing applications to BlackBerry World [appworld.BlackBerry.com]. There are currently two options for how you can use Qt5 on BlackBerry 10; see the sections below for more details.

The Qt team in BlackBerry started to provide pre-built Qt5 as a Qt Project delivery. Packages are available here[Broken Link] [qtlab.blackberry.com]. The purpose of the overlay is to support Qt5 enthusiasts and save them the time of building Qt5 from scratch. Most importantly, we would also like to get feedback from a broader community, which will help to improve Qt5 on BlackBerry 10 in the future. Please go through the README[Broken Link] [qtlab.blackberry.com] to learn how to install and use the packages. The provided packages require the 10.2 Gold [developer.blackberry.com] version of the BlackBerry 10 Native SDK. After the installation, Qt5 is automatically recognized and configured in Qt Creator 3.0 (and later), is available on the command line, and can be immediately used for application development. Even though you do not need your own build, you still need to pay attention to a few details described on this page.

Please provide your feedback! This helps make it better! Please use the QtPorts: BB10 [bugreports.qt-project.org] component for BlackBerry 10 specific issues, and other components for Qt generic issues even if you found them on BlackBerry 10. Consider visiting the related pages. Note: The overlay packages are not a part of the official NDK distributions by BlackBerry, but an add-on provided by the Qt Project. Be aware that you cannot mix Qt5 code with Cascades application framework APIs based on Qt 4.8. The Momentics IDE currently does not support Qt5 development.

The other option is building Qt5 yourself; this is for advanced developers and Qt contributors, and most application developers will probably prefer not to invest time in it. Please take a look at this page to get an overview of the status of Qt5 on BlackBerry 10.

See "Qt for Android known issues". See "Qt for iOS known issues". See "Qt5 Status on BlackBerry 10". See "Qt Status on QNX".

Authors: Jaroslaw Staniek [qt-project.org] (staniek at kde.org), Tomasz Olszak [qt-project.org]. Feedback welcome! This HOWTO explains the full process of installing and configuring the Qt SDK for Tizen, needed for developing software with Qt for Tizen smartphones (the RD-PQ [wiki.tizen.org] developer device and the emulator), the Tizen NUC [intel.com], which is the reference device for the Tizen Common profile, and Tizen IVI devices. This process applies to Qt for Tizen Alpha 7 and has been tested with Ubuntu 14.04 64bit. Other versions have not been tested (feedback is welcome). The version of the Tizen SDK is 2.2.1; the version of Qt is 5.4.0. While the process has been highly automated, it consists of several steps. The following steps describe creating the Tizen Common cross-compilation tools.
But you can also use the same steps for IVI (just change the profile parameter passed to the prepare_developer_tools script). Download the official Qt installer suitable for your architecture; the 5.4.0 installer is available from the Qt download server. Make the binary executable (chmod u+x qt-opensource-linux-[x86|x64]-5.4.0.run), and install it in a location writable by the current user (write access is needed for the Tizen Qt Creator plugin).

Be sure that you have bash as your shell; by default Ubuntu has dash. Change the shell to bash if necessary. Use the command line for most of the steps explained here. Install development tools (such as gbs) as specified at [source.tizen.org]. Please note that the version of the distribution should be correctly specified when adding the repository; for Ubuntu 13.04, add the repository line to /etc/apt/sources.list.d/tizen.list (please note the space before "/"). All recent Ubuntu and Debian versions are supported. Then update and install, answering "y" to the prompts that appear (possibly twice). Clone the tizenbuildtools git repository.

Even while Qt developers prefer using Qt Creator (or their beloved text editor plus command line) over the Tizen SDK's Eclipse-based integrated development environment, at least two tools are still needed. These tools are bundled and distributed in a single Tizen SDK, so it should be installed first. About 5 GB of free disk space is needed for these steps. As a result, you should have the SDK tools available.

If you want to develop for the mobile profile (a Tizen Mobile device or the emulator) you need to add an author certificate. The certificate generator tool is a cross-platform command line tool used to generate developer private keys and certificates from their intermediate CA certificates. The private keys and certificates are used for signing and verifying Tizen applications. Create and register the certificate in the Tizen IDE as explained here [developer.tizen.org].

After installing the Qt SDK [qt-project.org] (by default to the $HOME/Qt5.4.0 directory), follow the README file placed in tizenbuildtools/README. After that, start Qt Creator; the Tizen plugin should be available. Follow these steps to build, deploy, debug and run Qt applications on Tizen Common or Tizen IVI (or other Tizen profiles where an ssh server is available on the device). By default the "app" user (used for development) doesn't have a password, so if you want to connect using ssh you need to set a password. All configuration steps listed below should be performed, and in the specified order.

Compared to Qt 4, Qt 5 is highly modularized. Different Git repositories [qt.gitorious.org] hold the different Qt modules that developers can use in their applications. Some of these modules also encapsulate typedefs, enums, flags, or standalone functions within namespaces. The diagram below shows all Git repositories and modules that are part of the Qt 5.4 library.

Qt supports many different platforms and operating systems. The people in this list have the final responsibility for Qt on a certain platform/operating system.

I'd venture to say that they're at least a lot easier to use and handle than sockets :) An article on the Qt Project wiki [qt-project.org] compares the different approaches. The rest of this article demonstrates one of these methods: QThread + a worker QObject. This method is intended for use cases which involve event-driven programming and signals + slots across threads. The main thing is to move the worker object to the thread with moveToThread() and drive it through queued signal/slot connections; a minimal sketch follows.
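The sketch below uses illustrative names (Worker, doWork, resultReady); nothing here is taken verbatim from the article itself.

// Minimal sketch of the QThread + worker-QObject pattern described above.
#include <QThread>
#include <QObject>

class Worker : public QObject
{
    Q_OBJECT
public slots:
    void doWork()                  // runs in the worker thread
    {
        // ... heavy, blocking work goes here ...
        emit resultReady(42);
    }
signals:
    void resultReady(int result);
};

// Somewhere in the controller (GUI thread):
//   QThread *thread = new QThread;
//   Worker *worker = new Worker;
//   worker->moveToThread(thread);  // worker's slots now run in 'thread'
//   QObject::connect(thread, &QThread::started,  worker, &Worker::doWork);
//   QObject::connect(worker, &Worker::resultReady, thread, &QThread::quit);
//   QObject::connect(thread, &QThread::finished, worker, &QObject::deleteLater);
//   QObject::connect(thread, &QThread::finished, thread, &QObject::deleteLater);
//   thread->start();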
(The QThread discussion above is taken from [mayaposch.wordpress.com].)

Per-repository notes exist for: qtactiveqt, qtandroidextras, qtbase, qtconnectivity, qtdeclarative, qtdoc, qtenginio, qtquickcontrols, qtimageformats, qtlocation, qtmultimedia, qtsensors, qtserialport, qtwebkit, qtwinextras.

A full guide on Qt for Python - PySide and PyQt, with PySide and PyQt Python code examples, tutorials and references. Authored by Jason Fruit, who has worked with Python since 2000. In fact, he loves it so much, he even used it to name his children.

This page lists Android devices we already have available for testing, as well as devices it would be nice to get, sorted by priority. What matters here is the SoC manufacturer, the CPU architecture, and the GPU model; devices are provided as an example. If a specific one cannot be found, we can get another one with the same SoC. All of them have at least Android 4.0, to be able to test as many things as possible.

The Qt 5 for Android project is based on Necessitas [necessitas.kde.org], the port of Qt 4 to Android. This is an overview page to organize the development of the port. Qt 5 for Android is already released in Qt 5; for instructions on how to obtain the necessary packages, as well as to view documentation and run examples, visit the Qt 5 documentation page [qt-project.org].

Qt 5 for Android consists of several parts: 1. A platform plugin in $QTDIR/src/plugins/platforms/android. 2. Java code which is built into a distributable .jar file containing binding code, in $QTDIR/src/android/jar. 3. Java code which is used as a template for new projects by the Qt Creator plugin, in $QTDIR/src/android/java. 4. A mkspec in $QTDIR/mkspecs/android-g++. 5. Some build files in $QTDIR/android. 6. A plugin to Qt Creator which generates the necessary Java wrappers, manifests, build instructions, etc. to develop and deploy on Android; this is in $QTCREATOR/src/plugins/android.

If you have questions or suggestions for anyone working on this project, the easiest way is to contact us on IRC. We are in #necessitas on the Freenode IRC servers. We also have a project mailing list. The project is currently in the regular Qt repositories on codereview.qt-project.org; clone the repositories and check out the "dev" branch to try it out. For the first experimental release of Qt 5 for Android, we aim to have support for the modules in qtbase.git, qtdeclarative.git, Qt Sensors and Qt Multimedia. Status pages exist for qtbase.git (Qt Core, Qt GUI, Qt Network, Qt SQL, Qt Test, Qt Widgets, Qt Concurrent, Qt D-Bus, Qt OpenGL, Qt Print Support, Qt XML), for qtdeclarative.git (Qt QML, Qt Quick), for Qt Sensors, and for Qt Multimedia.

We are compiling a list of devices where Qt for Android has been tested. If you are testing on a device which is not yet in the list (or if you have additional information), please update the page to include your experiences. There is also a list of test devices in the Oslo office. We are currently running autotests manually on devices to monitor progress; automation is under investigation as well. Take a look at the current results. Remaining issues for Qt for Android [bugreports.qt-project.org]; remaining issues for the Android plugin in Qt Creator [bugreports.qt-project.org]. These are used to monitor the progress of the project.
Bugs and missing features can be filed at https://bugreports.qt-project.org by setting the component "QtPorts: Android" or "Android Support" in the Qt and Qt Creator products, respectively.

Mounting the remote file system (sshfs):

mkdir /mnt/a20/
apt-get install -f sshfs autoconf libtool
sshfs -o allow_other root@[targetAdress]:/ /mnt/a20/
git clone git://gitorious.org/cross-compile-tools/cross-compile-tools.git
cd cross-compile-tools
./fixQualifiedLibraryPaths /mnt/a20/ /usr/bin/arm-linux-gnueabihf-g++
cd ..

qeglfshooks_stub.cpp: wget
qmake.conf: wget

tar xvzf ressources/qt-everywhere-opensource-src-5.3.2.tar.gz
cd qt-everywhere-opensource-src-5.3.1
cp -rfv qtbase/mkspecs/devices/linux-beagleboard-g++ qtbase/mkspecs/devices/linux-a20olimex-g++
rm qtbase/mkspecs/devices/linux-a20olimex-g++/qmake.conf
cd ..
cp qmake.conf qt-everywhere-opensource-src-5.3.1/qtbase/mkspecs/devices/linux-a20olimex-g++/
cp qeglfshooks_stub.cpp qt-everywhere-opensource-src-5.3.1/qtbase/src/plugins/platforms/eglfs/
./configure -opengl es2 -device linux-a20olimex-g++ -device-option CROSS_COMPILE=/usr/bin/arm-linux-gnueabihf -sysroot /mnt/a20 -opensource -confirm-license -optimized-qmake -release -make libs -prefix /usr/local/qt5a20 -no-pch -nomake tests -no-xcb -eglfs -v -skip qtwebkit-examples -skip qtwebkit -tslib && cat qtbase/config.summary
make -j5
make install

Qt uses manual reference counting; ARC usage is currently prohibited. Rationale: ARC requires a 64-bit build. There is little benefit in using ARC for the 64-bit builds only, since we would still have to maintain the manual reference counting code paths. ARC can be used when Qt no longer supports 32-bit builds.

Qt does not patch or polyfill the Cocoa runtime using class_addMethod(). [There are some instances of this in Qt 5.3, but we do not want to expand the usage.] Rationale: Using this technique to work around OS version differences would be convenient. However, as a library we want to be conservative and not change the runtime. We also don't want to introduce another abstraction layer in addition to the version abstraction patterns already in Qt.

Use declared properties instead of calling getters and setters. Cocoa and UIKit are doing the same. Use them in Qt's own Objective-C classes as well. (We need to check which compiler versions that we support still need the @synthesize directive + ivar declaration. Ideally, we should move away from that too.) Rationale: This is how Objective-C works nowadays. It also adds the possibility of adding properties to classes through categories. Finally, it will ease the transition to ARC when 32-bit support is finally dropped.

We want Qt to work on new OS versions, including pre-releases. As platform developers we are not concerned about "official" support in the project, but would rather see Qt work right away. The main restriction is practical difficulties related to developing and testing on pre-release OS versions. Dropping support for old versions is done on a semi-regular basis, based on several inputs. The process of dropping support is gradual; gradual loss of quality is not a goal. QtWebKit/WebEngine lives its own life, depending on upstream support. We would like to move to a "the only version is the current version" world. (This is currently more true on iOS than OS X.) In general, if we support a given OS X version as a development platform, then we support the most recent Xcode available for that version. Refer to the Xcode wikipedia article [en.wikipedia.org] for a versions and requirements overview.
Both OS X and iOS set Q_OS_MAC. OS X sets Q_OS_OSX; iOS sets Q_OS_IOS. On the qmake side this corresponds to mac, osx, and ios. Don't use Q_OS_MACX. OS X and iOS share as much code as possible; this is done via the static platformsupport library. Use [NSApplication sharedApplication]. Rationale: While a little longer to type, [NSApplication sharedApplication] is type-safe and can save us from compiler errors.

Qt binaries are forward compatible: compile on 10.X and run on 10.X+. Qt binaries are backwards compatible: compile on 10.X and run on prior versions. How far back you can go depends on the Qt version in use. This is accomplished by compile-time and run-time version checks (weak linking). Grep for MAC_OS_X_VERSION_MAX_ALLOWED and QSysInfo::MacintoshVersion in the Qt source code to see the pattern in use.

There are basically three types of branches that we use in Qt Creator development: master, the minor version branches, and feature branches. We don't create branches for patch releases. The minor version branch is regularly merged up to master. Feature branches can be created for the development of code that the corresponding maintainer deems useful and potentially fit for merging into master when it is ready. There can be several reasons for creating a feature branch instead of developing on a separate git repository and posting the complete patch for review & merge after it is finished. The lifetime of feature branches is at the maintainer's discretion, i.e. whether a feature branch is created at all, and when it will be removed again (e.g. when the code is ready for merge, or when the code is no longer developed). Feature branches follow the naming wip/short-but-descriptive-name and should be announced on the mailing list.

The release branches go through several states, which are described here in chronological order. The Freezing state starts shortly before the release candidate, and continues through the final minor.0 release and a potential patch release. After that, the branch is put into Frozen mode. It can be defrosted by the Qt Creator maintainer if the need arises, but only in exceptional circumstances. After all, Qt Creator's release cycle is tight and the next minor release is not far away.

Covering Qt5 only, and so not mentioning "5", just "Qt". See the list of known issues for BlackBerry 10 for now; most of them are applicable. More details will be provided by the time of the Qt 5.3.0 final release. Please use the QtPorts: QNX [bugreports.qt-project.org] component for QNX specific issues, and other components for Qt generic issues even if you found them on QNX Neutrino OS. BlackBerry and QNX run a Jenkins based CI system which conducts build and on-device unit tests.

English Spanish Italian Magyar French [qt-devnet.developpez.com] Български

QML provides several mechanisms for styling a UI; below are three common approaches. QML supports defining your own custom components [qt-project.org]. Below, we create a custom component TitleText that we can use whenever our UI requires a title. If we want to change the appearance of all the titles in our UI, we can then just edit TitleText.qml, and the changes will apply wherever it is used. In this approach we define a Style QML singleton object that contains a collection of properties defining the style. As a QML singleton, a common instance can be accessed from anywhere which imports this directory. Note that QML singletons require a qmldir file with the singleton keyword preceding the Style type declaration; they cannot be picked up implicitly like other types.
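For comparison, a style singleton can also be registered from C++ instead of being declared in a qmldir file; a minimal sketch, with made-up type, URI and property names (the qmldir-based QML approach continues below):

// Minimal sketch: exposing a C++ 'Style' singleton to QML as an alternative
// to a qmldir-declared QML singleton. All names here are illustrative.
#include <QObject>
#include <QColor>
#include <QtQml>   // qmlRegisterSingletonType

class Style : public QObject
{
    Q_OBJECT
    Q_PROPERTY(QColor titleColor READ titleColor CONSTANT)
public:
    QColor titleColor() const { return QColor("#202040"); }
};

// At startup, before loading any QML:
//   qmlRegisterSingletonType<Style>("MyApp.Styles", 1, 0, "Style",
//       [](QQmlEngine *, QJSEngine *) -> QObject * { return new Style; });
//
// QML usage:  import MyApp.Styles 1.0   ...   color: Style.titleColor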
In the QML singleton approach, we have a Style singleton that is used by our custom component. If you need to nest QtObject instances to access more deeply structured properties (e.g. border.width.normal), you can do that as well.

Currently, Qt documentation is hosted at three sites. Here are the basic steps to help you get started contributing to the Qt documentation. Qt's documentation tool is QDoc. QDoc scans through the source and generates HTML pages covering the classes, enums, QML types, and other parts of the reference documentation. To get started, the QDoc Guide [doc-snapshot.qt-project.org] explains how QDoc generates documentation from QDoc comments. The process for submitting a documentation patch is the same as for source code; for more information, read the Code Reviews page. For language reviews, documentation reviews, and technical reviews, you may add any of the relevant maintainers as reviewers, as well as the listed documentation reviewers. For language reviews (particularly for non-native English speakers) only, you may also add any of the listed language reviewers. For documentation help, join the #qt-documentation channel on Freenode.

The organization and development of Qt 5 documentation is covered in another wiki: Qt5DocumentationProject. The Qt Documentation Structure page provides information about the structure of the documentation.

Up here in the calm and peaceful hills of Norway, we have been browsing the web during the long, dark winter nights. We found resources ranging from small how-tos to extensive tutorials, from beginner forums to places for exchange between experts. The choice is yours! If you know about a site not listed here or happen to run your own community, feel free to add it. Qt is present on Freenode [irc.freenode.net]. If you want to meet Qt developers and fellows in person, please join Qt Meetups. Check the provided link for details and find a Qt community near you!

QtQuick 2.0 applications require OpenGL 2.1 or higher to run. Windows only provides OpenGL 1.1 by default, which is not sufficient. One workaround is to use ANGLE, which implements the OpenGL ES 2 API on top of DirectX 9. The benefit of using ANGLE is that the graphics are hardware-accelerated, but ANGLE requires Windows Vista or later. The other option is to use a software renderer such as Mesa, which can run on Windows XP or later and may perform better than ANGLE when running inside virtual machines. This article describes the process of cross-compiling Mesa for Windows on Arch Linux. The result is an opengl32.dll that you can copy to the folder containing your application's executable to use Mesa LLVMpipe software rendering.

Cross compiling is currently the only method that can compile the latest versions of Mesa for Windows using GCC. Compiling Mesa natively on Windows using GCC with SCons results in a "the command line is too long" error during linking. There are known issues when compiling Mesa with optimizations using Visual Studio 2010. Compiling with Visual Studio 2012 is possible but requires Windows 7, and the default platform target does not support Windows XP. It is possible to cross compile using Cygwin or MSYS2 SCons. Prebuilt binaries for Mesa are available from the MSYS2 project.

Note: Use of the strerror_s function is disabled by writing an entry to config.cache, for Windows XP compatibility. You can now copy opengl32.dll from the ~/mesa_win32/dist folder to the folder containing your application's executable.
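On the application side, a dynamic-OpenGL Qt build (Qt 5.4 or later) can also be asked to prefer a software rasterizer at startup. This is a minimal sketch and an aside beyond the article's setup, which uses a Desktop OpenGL build; the attribute only has an effect on Qt builds configured with dynamic OpenGL loading:

// Minimal sketch: asking a dynamic-OpenGL Qt build (Qt 5.4+) to prefer a
// software rasterizer such as the Mesa opengl32.dll described above.
#include <QApplication>

int main(int argc, char *argv[])
{
    // Must be set before the QApplication object is constructed.
    QCoreApplication::setAttribute(Qt::AA_UseSoftwareOpenGL);
    QApplication app(argc, argv);
    // ... create and show the QtQuick view here ...
    return app.exec();
}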
Your Qt must be compiled with Desktop OpenGL to use opengl32.dll; the ANGLE build will not load the Mesa library.

Note: Use of the strerror_s function is disabled by writing an entry to config.cache, for Windows XP x64 compatibility. You can now copy opengl32.dll from the ~/mesa_win64/dist folder to the folder containing your application's executable. Again, your Qt must be compiled with Desktop OpenGL to use opengl32.dll; the ANGLE build will not load the Mesa library.

Qt Creator supports compiling with a MinGW toolchain out of the box. There are actually different MinGW toolchains and packages available. MinGW.org [mingw.org] is the original project; the latest version is gcc 4.7.2, and it only compiles 32-bit binaries. MinGW-w64 [mingw-w64.sourceforge.net] is a fork with the original aim of also supporting generation of 64-bit binaries. By now it also supports a much larger part of the Win32 API. The MinGW-w64 project hosts several different binary packages, done by different people. There are binary installers targeting MinGW for both Qt 4 and Qt 5. Up to Qt 4.8.6, the Qt 4 ones are built with a MinGW.org toolchain using gcc 4.4. Newer Qt 4.8 binary packages ship with a MinGW-w64 based toolchain. For Qt 5, a newer MinGW-w64 toolchain is actually required.

This error occurs if object files / libraries being linked were compiled with different versions of MinGW. The following steps can fix the problem: Run mingw32-make distclean in order to remove all object files that were compiled with different MinGW versions. Set the LIBRARY_PATH environment variable, for example set LIBRARY_PATH=c:\qt\2010.04\mingw\lib. The gcc linker has a very complicated library search algorithm [1] that can result in the wrong library being linked (for example, MinGW can find an installation of Strawberry Perl in PATH and use its library). [1] MinGW wiki about library search problems [mingw.org]

tests/manual/debugger/simple/simple.pro and tests/manual/debugger/cli-io/cli-io.pro provide the needed code. TBD.

The Qt Multimedia module is supported on iOS, but there are some limitations in what is supported, and some additional steps need to be taken to make use of the support. One of the unusual things about the iOS port is that everything must be compiled statically, due to the Apple policy of disallowing shared object files to be bundled with applications. It is not a problem to statically compile the Qt modules (including Multimedia) and link them into your application, but most of QtMultimedia's functionality is provided by plugins. These plugins also need to be statically linked into your application, and to do that you must add a line to your qmake project file. To access the low-level audio APIs on iOS, you need to statically link the corresponding audio plugin as well. iOS has basic support for capturing images and videos via the built-in camera devices. iOS also supports media playback. It may seem cumbersome to have to manually link these backends into each of your Qt applications, but the alternative would be to always include all 4 backends, which would unnecessarily increase the size and memory footprint of your iOS applications. This same issue exists in other modules that derive their functionality from plugins (e.g. QtSensors), so make sure to keep this in mind when building applications for iOS (or other platforms where you are statically linking your application).

Hi everyone! In this article I've tried to explain how to use the Mac OS X Share API in your Qt Quick app. The Share API was implemented in Mac OS 10.10 (Yosemite).
As you know, all frameworks in OS X which you could use in your application are created with Objective-C/C++. That's why you can't use a framework directly in your C++ classes. You need to rename your .cpp file to .mm, and then you can call Objective-C code. But sometimes you need to call Objective-C methods that take arguments, and as you know, in a .h file you can't call Objective-C code. You must create an Objective-C class which will call all the needed methods to work with the target framework.

First you need to create a class with all the methods in Objective-C/C++ to use the Share API; to do this you need to follow a few steps. The second class which you need to create is a class to call from C++; this class will call into the Objective-C/C++ one, again in a few steps (a C++-side sketch of such a bridge appears at the end of this section). Import the new type into the QML file where you need to use the share logic and add the corresponding code. This Share item can share only text and a link. If you need to share an image, you can implement special logic to convert QImage to NSImage, and add a new parameter to the shareCurrentContent() method.

Plugins do not necessarily implement all possible features, and different backends have different capabilities. The following tables give an overview of what is supported by each backend in Qt 5.4. Audio backends implement QAudioInput, QAudioOutput, QAudioDeviceInfo and QSoundEffect. Here is the list of the current audio backends. For playlists, only m3u is supported at the moment.

New Features in Qt 5.5. Qt 5.5 Tools & Versions [qt-project.org].

If this page is in a crude state, it's because it's a quick brain dump of experiences taking a simple (QQuickView + QML, minimal UI) app from Linux & OS X desktop to iOS, written in the hope it will be useful to other developers with Qt/desktop experience who are quite unfamiliar with "mobile", iOS and Xcode. Please feel free to edit mercilessly. This page doesn't (yet) describe App Store submissions. (If you're happy just programming Xcode's iOS device simulators then you just need Xcode and a Mac running OS X; there's no need to register as an iOS developer for that. Be warned the simulator performance characteristics are significantly different from real HW!)

Assuming you have a Qt project which builds for desktop from a qmake .pro file, you should be able to get an iOS build in a few steps. Useful settings can be got into the qmake-generated Info.plist using sed/regexps/XSLT transforms... whatever works for you. Additionally/alternatively, Apple provides a useful utility, /usr/libexec/PlistBuddy, for modifying these files. The workflow described above refers to deploying an app to a USB-attached iOS device. If you want to send an "installer" for an app to someone remote with an iOS device, note that it must be one which has your "team provisioning profile".

The Qt Project has a meritocratic structure, and it is important to know who does what. Find the details at The Qt Governance Model. There are many areas and roles that still need to be documented; see Infrastructure. Marketing is not officially constituted, but see the Marketing page.

English Italiano German Spanish Русский Magyar ಕನ್ನಡ

These are third party add-ons and libraries for Qt, both under open source licenses and under commercial licenses.
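Returning to the OS X Share API article above: the C++-facing half of the Objective-C++ bridge it describes might look roughly like this. All names here (ShareBridge, share, m_picker) are illustrative and not from the article; the real Objective-C calls would live in a matching .mm file.

// sharebridge.h: a hypothetical C++-facing side of an Objective-C++ bridge.
// Only C++ appears here, so the header can be included from ordinary .cpp
// files; Objective-C types stay behind the opaque pointer.
#pragma once
#include <QString>

class ShareBridge
{
public:
    ShareBridge();
    ~ShareBridge();
    // Ask the OS X sharing services to share a piece of text plus a link.
    void share(const QString &text, const QString &url);
private:
    void *m_picker; // opaque handle to an Objective-C object (pimpl style)
};

// sharebridge.mm (sketch of the other half):
//   #import <AppKit/AppKit.h>
//   ...build an NSArray from text.toNSString() (Qt 5.2+ helper) and the URL,
//   hand it to the AppKit sharing picker, and show it...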
http://qt-project.org/wiki/Special:Recentchanges_Atom
CC-MAIN-2014-52
en
refinedweb
_InjectDll UDF

#41 Posted 25 October 2008 - 10:01 PM
Here you can find the dll, and this is the forum thread for how to use the dll functions. Regards

#42 Posted 29 January 2009 - 04:45 PM
I'm trying to make a little script to inject any process you want with a dll. I tried the main example and another example and it's not working; it always returns error -7 (write memory error).

#include "_InjectDll.au3"
$ret = _InjectDllByPid(ProcessExists('calc.exe'), '7-zip32.dll')
If @error < 0 Then
    MsgBox(16, $ret, @error)
Else
    MsgBox(64, $ret, 'Injected !')
EndIf

Cheers, FireFox.

#45 Posted 04 February 2010 - 04:34 PM
I'm also getting the -7 write memory error... I've tried substituting the one provided with another one from a different UDF, but it doesn't change anything. Does anyone know why it's not working? I think I got it fixed: change both of the "int_ptr" items in the injector to "ptr*". JD
http://www.autoitscript.com/forum/topic/26831-injectdll-udf/page-3
CC-MAIN-2014-52
en
refinedweb
Originally posted by Burkhard Hassel: Howdy ranchers, you may find it interesting that these "forbidden chars" don't even compile in a comment:

public class Trap {
    public static void main(String args[]) {
        // a comment that doesn't compile '\u000a'
        System.out.println("ready");
    }
}

This class does not compile. Yours, Bu.

Originally posted by Raghavan Muthu: ... Since it appears within the comment, the presence of such a character really does NOT make sense, right?

Originally posted by marc weber: It might make sense. For example... //Use \u000A for new line But it won't compile.

Himai Minh wrote: A character is actually a 16-bit integer.

char c = '\u0061';
int i1 = c;

You can assign a character to an integer. You can also assign a character to a float. A float has 32 bits and a character has 16 bits, so a float has enough space to hold a character.
http://www.coderanch.com/t/264507/java-programmer-SCJP/certification/char-compiling
CC-MAIN-2014-52
en
refinedweb
Type: Posts; User: Alin

Wow! Hey guys, this thread is still alive after all these years! Good stuff.

That did the trick, thanks very much.

Thanks for the replies. I still remember the ON_COMMAND signature is void ;), but at the same time my function returns BOOL and I need it to be that way. Is there a cast I could try to make it...

I've been out of the VC world for years now. I'm trying to build an old project and I'm getting this error: Error 2 error C2440: 'static_cast' : cannot convert from 'BOOL (__thiscall...

Okay, thanks guys, I'll have the developers take a closer look.

No, it's not that... there is enough RAM on that machine. VM also looks OK. What are some of the usual suspects when these errors are surfacing?

Okay, it's been a while, and this has nothing to do with VB programming. During performance testing, we are getting the following error displayed in the browser: Microsoft VBScript runtime error...

::SendMessage(WM_USER_MSG, ...) from A to B.

:D :D :D Very nice, I like it. It could be used in a commercial.

So I don't understand why you aren't designating the full path yourself, something like: L"C:\\Temp\\TestDir\\"

Huh? Oh, come on, maybe he was serious... :D

I think WM_SETTINGCHANGE could be of help. Send it after you've modified the registry.

I can't believe how lazy you are. I only corrected the first if:

if (((pos_wid >= 210) && (pos_hei <= 297)) || ((pos_wid <= 420) && (pos_hei >= 297))) {
    cout << "Use A4" << endl;
}

Have you tried using GetAsyncKeyState, GetKeyState or GetKeyboardState?

4/2 = 2/1 = 2 :D

Check your "how do I..." posts. They'll be a good refresher.

Xeon, I refrained - so far - from making any comments, but what the hell happened and why did you leave programming in the first place?! I remember when I joined CG in 2002 you were so into it, good at...

The application would know if you associate a member variable with your edit. You can either use class wizard to do this, or do it manually. If you decide to use class wizard, open your dialog that...

Right... well, the OP could customize it according to his needs.

You could remember the ages in a vector and just sort the vector. If you can't use the sort function, you could write one of your own. #include <vector> #include <algorithm> int...

Search the FAQs - you'll find your answer.

We need to understand what the exact behavior is that you are after and what part of that is not working.
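On the ON_COMMAND/BOOL question in this digest: rather than a cast, MFC itself provides ON_COMMAND_EX, whose handlers return BOOL. A minimal sketch; the command ID and class names are made up, not taken from the thread:

// ON_COMMAND requires 'void Handler()', but ON_COMMAND_EX accepts
// 'BOOL Handler(UINT nID)'. Declared in the class as:
//   afx_msg BOOL OnMyCommand(UINT nID);
BEGIN_MESSAGE_MAP(CMainFrame, CFrameWnd)
    ON_COMMAND_EX(ID_MY_COMMAND, &CMainFrame::OnMyCommand)
END_MESSAGE_MAP()

BOOL CMainFrame::OnMyCommand(UINT nID)
{
    // ... do the work for the command identified by nID ...
    return TRUE; // handled; return FALSE to let other targets see the command
}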
http://forums.codeguru.com/search.php?s=87922d3b75a45979d054fff9a1a46273&searchid=5799037
CC-MAIN-2014-52
en
refinedweb