5th May, 2017
HRcc left a reply on Ideas To Store Record Activity Logs? • 2 weeks ago

Generally, storing logs in the main DB is completely fine. However, if you need to build some complex analytics on top of that, or the record count is huge, there might be better solutions available (InfluxDB, Cassandra, Couchbase, ...).

13th April, 2017
HRcc left a reply on Class 'Tests\Unit\Mockery' Not Found • 1 month ago

You have to namespace it properly: \Mockery.

15th March, 2017
HRcc left a reply on Why Laravel Dropped Elixir? Why Forcing Vue.js? What Is It That You Are Trying To Fix? • 2 months ago

@jlrdw Why would you think that recommending Vue confuses newcomers? If they can't open the Vue docs or Vuecasts to understand the goals of Vue, they have a much bigger problem than being new to the dev world/framework. The same applies to Bootstrap or Eloquent. It shouldn't matter if some of them think that it's 100% required. It just means they should learn more and understand the technologies separately before trying to combine them.

I honestly think that having a recommendation is much better for newcomers than doing research on React, Angular, Ember, Vue, Aurelia, ... if you just want that damn form to be posted asynchronously. If you've used Laravel for a while and had a need for it, it literally takes 2 minutes to swap Vue for React and Bootstrap for Bulma.

I've been working with Laravel since version ~3, and I can't remember a feature that I didn't like, even if I had no use for it at the time of each new release. But for whatever reason, lately, people have been going crazy about how open-source frameworks keep adding/improving features, and these are forced upon us -- poor developers.
Guys like Taylor, Evan, TJ and many others are working incredibly hard on this free stuff, yet not many people seem to appreciate it. It's like Jeffrey wants you to use a command bus with Vue for the landing page because you've been a naughty boy. It's all just tools and recommendations. Pick those you like and that feel right for the job, and keep the rest in the back of your head in case you need them later. Why does this message, which Jeffrey mentions in most of the videos, seem never to be heard?

14th March, 2017
HRcc left a reply on Why Laravel Dropped Elixir? Why Forcing Vue.js? What Is It That You Are Trying To Fix? • 2 months ago

Woah, this thread blows my mind. Anyways, here is something for you: Matt handles the immense task of removing Vue like a pro.

6th October, 2016
HRcc left a reply on About Vue 2.0 • 7 months ago

I've converted two larger apps to Vue 2.0 in the past week and the updated codebase just looks so much cleaner, especially when it comes to Vuex and Vue-router related stuff. Although I liked filters, sync, and dispatch & broadcast, it didn't take too much effort to get used to the new Vue 2.0 way of doing things.

5th October, 2016
HRcc left a reply on Dry Crud Controller! • 7 months ago

From my experience, you can create an abstract "god" controller which handles validation, storing, and responses, and it works in specific cases. However, I rarely find it useful and prefer readability much more. In addition, it generally tends to get really messy when you need to add filtering, queued jobs, route model binding, selective caching and all that good stuff you often use.

8th August, 2016
HRcc left a reply on Good Example Of Using Command Pipeline? • 9 months ago

I've recently used it for Eloquent query filtering/sorting - each stage of the pipeline checks the presence of some value on the request and adjusts the query builder accordingly. It's quite pleasant to work with, since you can test the stages independently or easily add/remove them.
Another use case I had was very similar. I needed to do some extra logic after a PATCH request, and I used this pipeline structure to pass an empty array with the request through multiple stages, which built my array of updates used in $model->update($updates).

27th June, 2016
HRcc left a reply on Whats The Best Role/Permission System For Laravel 5.1/5.2 • 10 months ago

You can have a look at laravel-permission. Lately, I've been using multiple of Spatie's packages and they are very pleasant to work with.

8th June, 2016
HRcc left a reply on Mailgun & Vue? • 11 months ago

19th May, 2016
HRcc left a reply on Anyone Using Laravel Echo? • 1 year ago

Haven't tried Echo yet, but Pusher needs you to handle auth in your API. Once you do that, you should be good.

15th March, 2016
HRcc left a reply on Authentication In Lumen • 1 year ago

6th March, 2016
HRcc left a reply on Seek The Best Solution For Role Based Access Control • 1 year ago

I am more than happy with

4th March, 2016
HRcc left a reply on Spark Will Not Be Free • 1 year ago

29th February, 2016
HRcc left a reply on Using Socket.io In Production Environment With Laravel Projects • 1 year ago

You can use laravel-cors to fix that error.

28th February, 2016
HRcc left a reply on Webpack, What Is It? • 1 year ago

You will never be able to truly master everything™. It's more important to know different approaches so that you are able to pick the right(ish) one for the problem you're dealing with. You can always focus on specifics when you need to.

HRcc left a reply on Webpack, What Is It? • 1 year ago

It's similar to Browserify -> a (not only) JavaScript bundler. It seems to be favoured in some communities (Vue, React, ...) because of cool dev features like hot loading and lazy loading. It's definitely worth learning.

20th February, 2016
HRcc left a reply on Calculate Age From Date Stored In Database In Y-m-d • 1 year ago

You should be able to use Carbon's diffInYears to calculate age easily.
Similarly, use Carbon to reformat the date for the view. The logic could be placed in the model directly (maybe as an age getter) or in a presenter. I would avoid putting this in the controller.

17th January, 2016
HRcc left a reply on Is There A Way I Can Support Both Guzzle 5 And 6 In My Laravel Package? • 1 year ago

I see that you have an HttpClientInterface which is implemented in your adapter. It shouldn't be difficult to write another implementation for Guzzle 6 and let users decide which one they want to use in the HttpClient trait. Moreover, when it comes to Laravel, you can provide config allowing for easy HTTP client selection (and you would just bind the chosen implementation in the service provider directly).

6th January, 2016
HRcc left a reply on Lumen Or Laravel? • 1 year ago

Just go with Laravel. Lumen is cool, but you probably don't want to build your whole app around it unless it's an API.

HRcc left a reply on Looking For Potential Book Reviewers • 1 year ago

I'd be happy to review that. Really like the idea of those illustrations :)

HRcc left a reply on What Is The Difference Between Integration Test & Acceptance Test • 1 year ago

Generally, I'd call an integration test anything which combines multiple 'units' of code. And acceptance tests are those coming from the outside in, written as if a user were directly executing the steps and observing the results.

For example, with that registration example of yours: you might have some form of RegistrationService, RegisterUserCommand or something similar, which takes input from the request and does multiple steps to register the user. You could test that if you provide correct input to this registration class, the user will be created in the database, they will be sent a welcome email, a new customer account will be created with Stripe, and you will get a notification on Slack. In this case you are interested in knowing whether all these registration steps were successful, not how they were executed in detail.
An acceptance test, on the other hand, could look like a user registering on your page in their browser. They fill in the registration form, hit the submit button, see the registration success message and receive the welcome email.

2nd January, 2016
HRcc left a reply on Laravel Anbu Profiler • 1 year ago

I guess that this profiler is no longer maintained (last commit from Sep 2014). Laravel-Debugbar has always been the way to go for me.

10th December, 2015
HRcc left a reply on Nested Components • 1 year ago

Sure, there's nothing wrong with that. However, remember that you should get the message into the alert somehow (either using props or transclusion). Moreover, you would probably want to register a global component for the alert (and not a subcomponent of contact-form), since you'll probably use it elsewhere as well.

7th November, 2015
HRcc left a reply on Async Requests With Authentication • 1 year ago

You can use JWT or other token auth solutions. This way you will send credentials only once and use a token with each request instead.

25th October, 2015
HRcc left a reply on L5.2 Block User From Editing Another User • 1 year ago

You don't need the else block at all + have a look at the ACL series.

16th October, 2015
HRcc left a reply on Start Values For Migrations • 1 year ago

Why don't you use seeders?

12th October, 2015
HRcc left a reply on Lost In Testing • 1 year ago

Yes, I would do it that way. Maybe you can get away with a VM, spinning up a low-cost virtual server or using your other personal server for this. It depends on your needs and the character of your app.

11th October, 2015
HRcc left a reply on L5.1: How To Share A Collection In All Views? • 1 year ago

... or View Composers if you keep that in a sidebar or footer partial.

10th October, 2015
HRcc left a reply on Lost In Testing • 1 year ago

Don't try to unit test your controllers or models. You can use an integration approach for those.
The implementation depends on your requirements and app logic; however, these tests are more useful for controllers and models, since they mimic real-world behaviour much more closely than mocked tests.

HRcc left a reply on Why Laravel 5.1 Don't Have Bootstrap Included • 1 year ago

NPM / Bower / download it from the website and throw the files into the public folder...

HRcc left a reply on CSRF With API • 1 year ago

You don't need to. Exclude your API from CSRF checks. If you are using JWT or OAuth, you're not dealing with cookies, which makes CSRF protection unnecessary.

1st October, 2015
HRcc left a reply on Entities Tightly Coupled With Eloquent • 1 year ago

I'd recommend that you either embrace Eloquent (Active Record) or replace it with something else (Data Mapper/...). Regarding testability, I've written tons of tests for complex Eloquent model interactions and it never bothered me in the slightest. Moreover, it gives you a nice way to test your API - naming/returned objects/... Although Eloquent can feel kinda dirty if you try to follow some 'recommendations', it is not, and it truly is a quick and natural way of interacting with the DB.

24th September, 2015
HRcc left a reply on How Can We Create Frontend And Backend Login ?? • 1 year ago

I generally prefer to build the admin part separately. You can always share models/services using custom packages, but with this approach you can focus on each part doing what it's supposed to do. Moreover, a broken commit related to reporting won't kill your landing page, etc.

14th September, 2015
HRcc left a reply on Form Request Like Uploading • 1 year ago

This really looks like a neat solution :) Generally I tend to use queued jobs for this task, since there is often some additional processing or an S3 upload involved. However, firing an event at the end of the upload method could solve this.
12th September, 2015
HRcc left a reply on Laravel 5.1 ACL Help • 1 year ago

@xtremer360 check 18:08

11th September, 2015
HRcc left a reply on Laravel 5.1.11 Brings Us Authorization! (User Permissions / Access Control) • 1 year ago

@kocoten1992 I solved this exact thing today... by uninstalling Entrust :) The switch was very quick and painless.

HRcc left a reply on Question: SPA Structure. Starting With Angular With Laravel • 1 year ago

Firstly, if you want to build an SPA with Angular + Laravel, then your best bet is to build them independently. Use Laravel to form the API and consume the API with your Angular app. This way you can switch Angular out at any point or use native mobile apps as another means of consuming your API. As for CSRF, remove the CSRF middleware and you're good. Otherwise you'll have to serve your Angular app from a Laravel view to get that CSRF token, send it with each request, ... Just use JWT (have a look at satellizer) and you don't have to bother with CSRF, mobile issues or sessions.

HRcc left a reply on Laravel Policies With Cartalyst Sentinel • 1 year ago

Since it works for you, it's good. However, you could create a custom Gate implementation which plays nicely with Sentinel and swap it with the original. I'd consider that a cleaner solution than using two auth systems at the same time.

4th September, 2015
HRcc left a reply on Laravel Spark • 1 year ago

@MattCroft The frontend should be enhanced by Vue if I remember correctly, although many views are probably just basic Laravel views. Regarding configuration, I believe that you expect too much. It should help you start quickly without the need to build the stuff included, but everything else is for you to create. If you don't want to deploy after adding a subscription, then create an admin layer around it and manage it there.
There is no way that Taylor would be able to implement all the specific implementations that people might want; he basically provides an interface to work with. Spark = a wrapper around the 'core' parts of a SaaS.

30th August, 2015
HRcc left a reply on Laravel 5.1.11 Brings Us Authorization! (User Roles / Access Control) • 1 year ago

27th August, 2015
HRcc left a reply on Project Scale • 1 year ago

You can always create custom packages for search/payments/dashboard/... and pull them through Composer (pointing to a private or public repository) into all those applications where they're needed. This way you can avoid code duplication quite efficiently.

10th August, 2015
HRcc left a reply on Returning Data From A Job? • 1 year ago

A self-handling job uses the constructor to assign properties and the handle method to do the stuff. You just want to return data from the handle method, if I'm not mistaken. Why would you then bother with the command bus and command pattern, since basically you just need a simple class "looking" like a job/command, which does exactly that? Moreover, you can turn it around and request dependencies in the constructor and pass parameters to handle (or whatever method) to process. This will get you this:

public function store(Request $request, PostMessage $postMessageCommand)
{
    if (!$this->messageValidator->isValid($request->all())) {
        return redirect()->back()->withInput()->withErrors($this->messageValidator->getErrors());
    }

    $input = $request->all();
    $input['user'] = Auth::user();

    $messageId = $postMessageCommand->handle($input);

    return redirect('messages');
}

You will have your logic in one class as requested, and you can create multiple classes like this, or join them together into a MessagingService or something similar.

6th August, 2015
HRcc left a reply on Speaking Of Pushing To Clients: New Lesson Notifications? • 1 year ago

Or how about Slack notifications? :)

29th July, 2015
HRcc left a reply on Encrypting Model Data • 1 year ago

Eh, my bad, switched it for the correct example.
Thanks for pointing that out.

HRcc left a reply on Encrypting Model Data • 1 year ago

@ShaunL No, it won't be the same, since there is a randomized initialization vector in the Encrypter class. You can easily test that in tinker by calling bcrypt('secret stuff') twice or more times.

24th July, 2015
HRcc started a new conversation Real-time Threads View • 1 year ago

Sometimes I don't close tabs with specific threads, because I'm waiting for a response or just another entry. The way it works now is that you receive the notification about the newly added post, then you have to reload the page yourself to view it. How about utilizing this push message in such a way that it would run an AJAX request to grab the new post and append it to the list (maybe with a label showing that it's new)? It would definitely be a good topic for a video as well :) What do you think about it, @JeffreyWay?

HRcc left a reply on Change Laracasts Colour Back To Laravel Colour • 1 year ago

I had to turn off my f.lux to check if it was real :) But to be honest, I'm not a big fan of this change. Maybe I'm just old fashioned, but a red submit button feels wrong. Although I really like the red colour, the amount of it feels like too much. It looks like there are some sort of errors around the page, and the red text itself is a bit too dark on both white and dark backgrounds.

HRcc left a reply on Can't Laracast Be Free? :( • 1 year ago

I guess the point is that products in our industry are mostly immaterial assets. That leads to situations where some people think that there is no "real" value behind them, compared to bread, a book, a table, a car, ... If anybody asks for a free car, you'll consider him a fool, but asking for free screencasts is no different in my mind. Some people make them for free, which is great, but the quality may vary. Here you are paying $9/month for high-quality screencasts which help you solve most of the problems you'll have to deal with.
It's basically free for the value it provides, since I'm sure that you would spend much more time browsing SO or Google to get the information provided in any short video here - Laracasts really pays for itself. (... and I come from a country where the average hourly wage is around $3-4, with the price of a coffee around $2)

21st July, 2015
HRcc left a reply on $fractal Versus Protected $casts() • 1 year ago

Sure, there is a lot of stuff Fractal offers you - data includes, powerful pagination, easy edits of your column names without changes in your API, ... But if you only need casting, you probably don't need Fractal at all.

15th July, 2015
HRcc left a reply on Why Use Service Providers? Any Examples? • 1 year ago
https://laracasts.com/@HRcc
CC-MAIN-2017-22
en
refinedweb
Proposed features/Former stations

Currently there are a number of 'unofficial' tags used for former stations, in varying states of use, disuse or complete ruin. Although railway=station disused=yes can be used for some of these, some tools that do not explicitly check for the disused tag may confuse these with active stations. Therefore it is suggested that either:

- a new namespaced railway:historic=station_site is created (not preferable - see comments), or
- a new value is created for the existing historic=* tag, being historic=station_site or historic=railway,

to allow for the tagging of former station sites, be they mothballed, disused, abandoned, or obliterated. In addition to the main tag, it is suggested the following additional params could be used:

Please move status and dates into a "former stations" namespace unless they're going to be more generally applicable (in which case, propose and document them separately). Probably this entire mess should be re-expressed as historic:railway, historic:operator, historic:railway:status etc. so that it's "under" historic=* (which describes previous usage) rather than railway=* (which describes current status and usage). --achadwick 11:20, 19 May 2009 (UTC)

The argument to make this (some tools don't use the already approved attributes) is not good enough IMO. It's the tools that should change, not the meaning of our tagging. EsbenDamgaard 11:10, 23 June 2009 (UTC)
http://wiki.openstreetmap.org/wiki/Proposed_features/Former_stations
Shop Safely with PayPal

All electrical products ship in 230 V / 50 Hz. All prices are in Australian currency ($AUD).

Water Distillers - Genie Standard Countertop Water Distillers

2 Star Recommendation - Discontinued - No stock left as of November 2014.

Orders are usually dispatched (from Australia) the same day - or the very next working day!

Genie Standard Mk2-S (white powder-coated outer casing) - view
- Semi-automatic water distiller - produces 17 litres / 24 hours
- Includes complimentary 3.9 litre polycarbonate Collector Bottle
- Includes a packet of 6 coconut charcoal filters
- Entry-level water distiller
- Includes insured delivery

IMPORT DUTIES: It is the sole responsibility of the customer to pay any applicable import duties/taxes when importing JUICERS AUSTRALIA © products into their country.

~ Genie Standard Mk2 Water Distillers ~ makes ultra pure distilled water
http://www.juicersaustralia.com.au/PayPal/PayPal-Genie-Standard-Countertop-Water-Distiller.shtml
Hi,

I went on a trip for the last three days and returned today. I was surprised to find this loooong thread now :)

I have one comment.

> TOTALLY UNTESTED. As usual. But the concept is pretty simple, and it
> actually removes a fair chunk of hacky code. The only reason the diffstat
> output says that it adds more lines than it deletes is that I added more
> comments and made that helper inline function rather than make a complex
> conditional.
>
> Whaddaya think?
>
> Linus
>
> ---
>  mm/mmap.c  | 48 +++++++++++++++++++++++++-----------------------
>  mm/shmem.c |  2 +-
>  2 files changed, 26 insertions(+), 24 deletions(-)
>
> diff --git a/mm/mmap.c b/mm/mmap.c
> index c581df1..5fcaec3 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -1090,6 +1090,15 @@ int vma_wants_writenotify(struct vm_area_struct *vma)
>  		mapping_cap_account_dirty(vma->vm_file->f_mapping);
>  }
>
> +/*
> + * We account for memory if it's a private writeable mapping,
> + * and VM_NORESERVE wasn't set.
> + */
> +static inline int private_accountable_mapping(unsigned int vm_flags)
> +{
> +	return (vm_flags & (VM_NORESERVE | VM_SHARED | VM_WRITE)) == VM_WRITE;
> +}
> +
>  unsigned long mmap_region(struct file *file, unsigned long addr,
>  			unsigned long len, unsigned long flags,
>  			unsigned int vm_flags, unsigned long pgoff,
> @@ -1117,23 +1126,24 @@ munmap_back:
>  	if (!may_expand_vm(mm, len >> PAGE_SHIFT))
>  		return -ENOMEM;
>
> -	if (flags & MAP_NORESERVE)
> +	/*
> +	 * Set 'VM_NORESERVE' if we should not account for the
> +	 * memory use of this mapping. We only honor MAP_NORESERVE
> +	 * if we're allowed to overcommit memory.
> +	 */
> +	if ((flags & MAP_NORESERVE) && sysctl_overcommit_memory != OVERCOMMIT_NEVER)

I'm afraid of this line a bit. If the following scenario happens, can we lose VM_NORESERVE?

1. admin sets overcommit_memory to "never"
2. mmap
3. admin sets overcommit_memory to "guess"

thanks.
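The predicate in the patch is a pure bit test, so its behaviour is easy to verify outside the kernel. Below is a Python sketch of the same logic; the flag values are illustrative stand-ins for the kernel constants, not something to rely on:

```python
# Sketch of the patch's private_accountable_mapping() predicate.
# Flag values are illustrative stand-ins, not necessarily the kernel's.
VM_WRITE = 0x2
VM_SHARED = 0x8
VM_NORESERVE = 0x200000

def private_accountable_mapping(vm_flags: int) -> bool:
    # Account for the mapping only when it is writable, private
    # (VM_SHARED clear), and VM_NORESERVE was not requested.
    return (vm_flags & (VM_NORESERVE | VM_SHARED | VM_WRITE)) == VM_WRITE

print(private_accountable_mapping(VM_WRITE))                 # private writable: accounted
print(private_accountable_mapping(VM_WRITE | VM_SHARED))     # shared: not accounted
print(private_accountable_mapping(VM_WRITE | VM_NORESERVE))  # opted out: not accounted
```

Note that the predicate only sees VM_NORESERVE if mmap_region recorded it at mmap time, which in the patched code depends on the overcommit sysctl at that moment. That is exactly the scenario the reply worries about: flipping the sysctl between steps changes whether the flag was stored.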
https://lkml.org/lkml/2009/2/2/81
The elegant key to its functionality is the subclassing of System.Web.Mvc.RazorViewEngine, overriding the "FindView" method (Figure 1), and deciding which View to serve up based on the browser type. This prevents having to change any code in your controllers, keeping things clean.

Figure 1

using System.Web.Mvc;

namespace Web
{
    class MobileViewEngine : RazorViewEngine
    {
        public override ViewEngineResult FindView(ControllerContext controllerContext, string viewName, string masterName, bool useCache)
        {
            ViewEngineResult result = null;
            var request = controllerContext.HttpContext.Request;

            // Determine if this is a mobile browser.
            // IsSupportedMobileDevice is an extension method which queries the UserAgent.
            if (request.IsSupportedMobileDevice() && ApplicationHelper.HasMobileSpecificViews)
            {
                // Get the mobile view
                string viewPathAndName = ApplicationHelper.MobileViewsDirectoryName + viewName;
                result = base.FindView(controllerContext, viewPathAndName, masterName, true);
                if (result == null || result.View == null)
                {
                    result = base.FindView(controllerContext, viewPathAndName, masterName, false);
                }
            }
            else
            {
                result = base.FindView(controllerContext, viewName, masterName, useCache);
            }

            return result;
        }
    }
}

To create your mobile views, you'll simply need to create a folder called "Mobile" (or anything else, as it's configurable) under the existing Views folder.

It's all downhill from there. I made a couple of changes and additions to the UserAgent check, but aside from that, this template will have you up and running with an MVC 3 mobile app in no time.

To get it from Visual Studio:
1. Open the Extension Manager from Tools/Extension Manager
2. Go to the Online Gallery, and search for "Mobile Ready HTML5"
3. Click Download.

To get it from a browser: Go here:.

Thanks to Sammy Ageil for putting this together. He's got some more details and instructions on his site:.
http://tekprolixity.blogspot.com/2012/04/instant-mvc-3-mobile-app.html
Build a Hardware-based Face Recognition System for $150 with the Nvidia Jetson Nano and Python

Using Python 3.6, OpenCV, Dlib and the face_recognition module

With the Nvidia Jetson Nano, you can build stand-alone hardware systems that run GPU-accelerated deep learning models on a tiny budget. It's just like a Raspberry Pi, but a lot faster. To get you inspired, let's build a real hardware project with a Jetson Nano.

What is the Nvidia Jetson Nano and how is it different than a Raspberry Pi?

For years, Raspberry Pi has been the easiest way for a software developer to get a taste of building their own hardware devices. The Raspberry Pi is a $35 computer-on-a-board that runs Linux and fully supports Python. And if you plug in a $20 Raspberry Pi camera module, you can use it to build stand-alone computer vision systems. It was a game-changing product that sold over 12 million units in the first five years alone and exposed a new generation of software developers to the world of hardware development.

While the Raspberry Pi is an amazing product, it's painful to use for deep learning applications. The Raspberry Pi doesn't have a GPU and its CPU isn't especially fast at matrix math, so deep learning models usually run very slowly. It just isn't what the Raspberry Pi was designed to do. Lots of computer vision developers tried to use it anyway, but they usually ended up with applications that ran at less than one frame of video a second.

Nvidia noticed this gap in the market and built the Jetson Nano. The Jetson Nano is a Raspberry Pi-style hardware device that has an embedded GPU and is specifically designed to run deep learning models efficiently. The other really cool part is that the Jetson Nano supports the exact same CUDA libraries for acceleration that almost every Python-based deep learning framework already uses.
This means that you can take an existing Python-based deep learning app and often get it running on the Jetson Nano with minimal modifications and still get decent performance. It's a huge step up from the Raspberry Pi for deep learning projects.

What to Buy

With any hardware project, the first step is to buy all the parts that you'll need to build the system. Here are the minimal pieces that you'll need to buy:

1. Nvidia Jetson Nano board ($99 USD)

These are currently hard to get and regularly out of stock. Please watch out for scammers and try to buy from an official source to avoid getting scammed. You can often find them in stock direct from Nvidia. Full disclosure: I got my Jetson Nano board for free from a contact at Nvidia (they were sold out everywhere else) but I have no financial or editorial relationship with Nvidia.

2. MicroUSB power plug (~$10 USD)

Look for a power adapter that specifically says it supports the Jetson Nano if possible, as some USB plugs can't put out enough power. But an old cell phone charger might work.

3. Raspberry Pi Camera Module v2.x (~$30 USD)

You can't use a Raspberry Pi v1.x camera module! The chipset is not supported by the Jetson Nano. It has to be a v2.x camera module to work.

4. A fast microSD card with at least 32GB of space (~$10-$25 USD)

I got a 128GB card for a few dollars more on Amazon. I recommend going larger so you don't run out of space. If you already have an extra microSD card sitting around, feel free to re-use it.

5. There are also a few other things that you will need, but you might already have them sitting around:

- A microSD card reader for your computer so that you can download and install the Jetson software
- A wired USB keyboard and a wired USB mouse to control the Jetson Nano
- Any monitor or TV that accepts HDMI directly (not via an HDMI-to-DVI converter) so you can see what you are doing. You must use a monitor for the initial Jetson Nano setup even if you run without a monitor later.
- An ethernet cable and somewhere to plug it in. The Jetson Nano bizarrely does not have wifi built in. You can optionally add a USB wifi adapter, but support is limited to certain models, so check before buying one.

Get all that stuff together and you are ready to go! Hopefully, you can get everything for less than $150. The main costs are the Jetson Nano board itself and the camera module. Of course, you might want to buy or build a case to house the Jetson Nano hardware and hold the camera in place. But that entirely depends on where you want to deploy your system.

Downloading the Jetson Nano Software

Before you start plugging things into the Jetson Nano, you need to download the software image for the Jetson Nano. Nvidia's default software image is great! It includes Ubuntu Linux 18.04 with Python 3.6 and OpenCV pre-installed, which saves a lot of time.

Here's how to get the Jetson Nano software onto your SD card:

- Download the Jetson Nano Developer Kit SD Card Image from Nvidia.
- Download Etcher, the program that writes the Jetson software image to your SD card.
- Run Etcher and use it to write the Jetson Nano Developer Kit SD Card Image that you downloaded to your SD card. This takes about 20 minutes or so.

At this point, you have an SD card loaded with the default Jetson Nano software. Time to unbox the rest of the hardware!

Plugging Everything In

First, take your Jetson Nano out of the box: All that is inside is a Jetson Nano board and a little paper tray that you can use to prop up the board. There's no manual or cords or anything else inside.

The first step is inserting the microSD card. However, the SD card slot is incredibly well hidden. You can find it on the rear side under the bottom of the heatsink.

Next, you need to plug in your Raspberry Pi v2.x camera module. It connects with a ribbon cable. Find the ribbon cable slot on the Jetson, pop up the connector, insert the cable, and pop it back closed.
Make sure the metal contacts on the ribbon cable are facing inwards toward the heatsink. Now, plug in everything else:

- Plug in a mouse and keyboard to the USB ports.
- Plug in a monitor using an HDMI cable.
- Plug in an ethernet cable to the network port and make sure the other end is plugged into your router.
- Finally, plug in the MicroUSB power cord.

You'll end up with something that looks like this: The Jetson Nano will automatically boot up when you plug in the power cable. You should see a Linux setup screen appear on your monitor.

First Boot and User Account Configuration

The first time the Jetson Nano boots, you have to go through the standard Ubuntu Linux new user process. You select the type of keyboard you are using, create a user account and pick a password. When you are done, you'll see a blank Ubuntu Linux desktop.

At this point, Python 3.6 and OpenCV are already installed. You can open up a terminal window and start running Python programs right now, just like on any other computer. But there are a few more libraries that we need to install before we can run our doorbell camera app.

Installing Required Python Libraries

To build our face recognition system, we need to install several Python libraries. While the Jetson Nano has a lot of great stuff pre-installed, there are some odd omissions. For example, OpenCV is installed with Python bindings, but pip and numpy aren't installed, and those are required to do anything with OpenCV. Let's fix that.

From the Jetson Nano desktop, open up a Terminal window and run the following commands. Any time it asks for your password, type in the same password that you entered when you created your user account:

sudo apt-get update
sudo apt-get install python3-pip cmake libopenblas-dev liblapack-dev libjpeg-dev

First, we are updating apt, which is the standard Linux software installation tool that we'll use to install everything else.
Next, we are installing some basic libraries with apt that we will need later to compile numpy and dlib. Before we go any further, we need to create a swapfile. The Jetson Nano only has 4GB of RAM, which won't be enough to compile dlib. To work around this, we'll set up a swapfile, which lets us use disk space as extra RAM. Luckily, there is an easy way to set up a swapfile on the Jetson Nano. Just run these two commands:

git clone

Note: This shortcut is thanks to the JetsonHacks website. They are great! At this point, you need to reboot the system to make sure the swapfile is running. If you skip this, the next step will fail. You can reboot from the menu at the top right of the desktop. When you are logged back in, open up a fresh Terminal window and we can continue. First, let's install numpy, a Python library that is used for matrix math calculations:

pip3 install numpy

This command will take about 15 minutes since it has to compile numpy from scratch. Just wait until it finishes and don't worry if it seems to freeze for a while. Now we are ready to install dlib, a deep learning library created by Davis King that does the heavy lifting for the face_recognition library. However, there is currently a bug in Nvidia's own CUDA libraries for the Jetson Nano that keeps it from working correctly. To work around the bug, we'll have to download dlib, edit a line of code, and re-compile it. But don't worry, it's no big deal. In Terminal, run these commands:

wget
tar jxvf dlib-19.17.tar.bz2
cd dlib-19.17

That will download and uncompress the source code for dlib. Before we compile it, we need to comment out a line. Run this command:

gedit dlib/cuda/cudnn_dlibapi.cpp

This will open up the file that we need to edit in a text editor.
Search the file for the following line of code (which should be line 854):

forward_algo = forward_best_algo;

And comment it out by adding two slashes in front of it, so it looks like this:

//forward_algo = forward_best_algo;

Now save the file, close the editor, and go back to the Terminal window. Next, run these commands to compile and install dlib:

sudo python3 setup.py install

This will take around 30–60 minutes to finish and your Jetson Nano might get hot, but just let it run. Finally, we need to install the face_recognition Python library. Do that with this command:

sudo pip3 install face_recognition

Now your Jetson Nano is ready to do face recognition with full CUDA GPU acceleration. On to the fun part! Running the Face Recognition Doorbell Camera Demo App The face_recognition library is a Python library I wrote that makes it super simple to do face recognition. It lets you detect faces, turn each detected face into a unique face encoding that represents the face, and then compare face encodings to see if they are likely the same person — all with just a couple of lines of code. Using that library, I put together a doorbell camera application that can recognize people who walk up to your front door and track each time the person comes back. Here's what it looks like when you run it: To get started, let's download the code. I've posted the full code here with comments, but here's an easier way to download it onto your Jetson Nano from the command line:

wget -O doorcam.py tiny.cc/doorcam

Then you can run the code and try it out:

python3 doorcam.py

You'll see a video window pop up on your desktop. Whenever a new person steps in front of the camera, it will register their face and start tracking how long they have been near your door. If the same person leaves and comes back more than 5 minutes later, it will register a new visit and track them again. You can hit 'q' on your keyboard at any time to exit.
The app will automatically save information about everyone it sees to a file called known_faces.dat. When you run the program again, it will use that data to remember previous visitors. If you want to clear out the list of known faces, just quit the program and delete that file. Doorbell Camera Python Code Walkthrough Want to know how the code works? Let's step through it. The code starts off by importing the libraries we are going to be using. The most important ones are OpenCV (called cv2 in Python), which we'll use to read images from the camera, and face_recognition, which we'll use to detect and compare faces.

import face_recognition
import cv2
from datetime import datetime, timedelta
import numpy as np
import platform
import pickle

Next, we are going to create some variables to store data about the people who walk in front of our camera. These variables will act as a simple database of known visitors.

known_face_encodings = []
known_face_metadata = []

This application is just a demo, so we are storing our known faces in a normal Python list. In a real-world application that deals with more faces, you might want to use a real database instead, but I wanted to keep this demo simple. Next, we have a function to save and load the known face data. Here's the save function:

def save_known_faces():
    with open("known_faces.dat", "wb") as face_data_file:
        face_data = [known_face_encodings, known_face_metadata]
        pickle.dump(face_data, face_data_file)
        print("Known faces backed up to disk.")

This writes the known faces to disk using Python's built-in pickle functionality. The data is loaded back the same way, but I didn't show that here. I wanted this program to run on a desktop computer or on a Jetson Nano without any changes, so I added a simple function to detect which platform it is currently running on:

def running_on_jetson_nano():
    return platform.machine() == "aarch64"

This is needed because the way we access the camera is different on each platform.
On a laptop, we can just pass in a camera number to OpenCV and it will pull images from the camera. But on the Jetson Nano, we have to use gstreamer to stream images from the camera, which requires some custom code. By being able to detect the current platform, we'll be able to use the correct method of accessing the camera on each platform. That's the only customization needed to make this program run on the Jetson Nano instead of a normal computer! Whenever our program detects a new face, we'll call a function to add it to our known face database:

def register_new_face(face_encoding, face_image):
    known_face_encodings.append(face_encoding)
    known_face_metadata.append({
        "first_seen": datetime.now(),
        "first_seen_this_interaction": datetime.now(),
        "last_seen": datetime.now(),
        "seen_count": 1,
        "seen_frames": 1,
        "face_image": face_image,
    })

First, we are storing the face encoding that represents the face in a list. Then, we are storing a matching dictionary of data about the face in a second list. We'll use this to track the time we first saw the person, how long they've been hanging around the camera recently, how many times they have visited our house, and a small image of their face.
We also need a helper function to check if an unknown face is already in our face database or not:

def lookup_known_face(face_encoding):
    metadata = None

    if len(known_face_encodings) == 0:
        return metadata

    face_distances = face_recognition.face_distance(
        known_face_encodings,
        face_encoding
    )
    best_match_index = np.argmin(face_distances)

    if face_distances[best_match_index] < 0.65:
        metadata = known_face_metadata[best_match_index]

        metadata["last_seen"] = datetime.now()
        metadata["seen_frames"] += 1

        if datetime.now() - metadata["first_seen_this_interaction"] > timedelta(minutes=5):
            metadata["first_seen_this_interaction"] = datetime.now()
            metadata["seen_count"] += 1

    return metadata

We are doing a few important things here:

- Using the face_recognition library, we check how similar the unknown face is to all previous visitors. The face_distance() function gives us a numerical measurement of similarity between the unknown face and all known faces — the smaller the number, the more similar the faces.
- If the face is very similar to one of our known visitors, we assume they are a repeat visitor. In that case, we update their "last seen" time and increment the number of times we have seen them in a frame of video.
- Finally, if this person has been seen in front of the camera in the last five minutes, we assume they are still here as part of the same visit. Otherwise, we assume that this is a new visit to our house, so we'll reset the time stamp tracking their most recent visit.

The rest of the program is the main loop — an endless loop where we fetch a frame of video, look for faces in the image, and process each face we see. It is the main heart of the program. Let's check it out:

def main_loop():
    if running_on_jetson_nano():
        video_capture = cv2.VideoCapture(
            get_jetson_gstreamer_source(),
            cv2.CAP_GSTREAMER
        )
    else:
        video_capture = cv2.VideoCapture(0)

The first step is to get access to the camera using whichever method is appropriate for our computer hardware.
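The get_jetson_gstreamer_source() helper used above isn't listed in this walkthrough. It just builds a GStreamer pipeline string for the Nano's CSI camera. A sketch along these lines should be close to the real thing, though the default resolutions and frame rate here are assumptions:

```python
def get_jetson_gstreamer_source(capture_width=1280, capture_height=720,
                                display_width=1280, display_height=720,
                                framerate=60, flip_method=0):
    """Build a GStreamer pipeline string that captures from the Jetson Nano's
    CSI camera and hands BGR frames to OpenCV via an appsink element."""
    return (
        f'nvarguscamerasrc ! video/x-raw(memory:NVMM), '
        f'width=(int){capture_width}, height=(int){capture_height}, '
        f'format=(string)NV12, framerate=(fraction){framerate}/1 ! '
        f'nvvidconv flip-method={flip_method} ! '
        f'video/x-raw, width=(int){display_width}, height=(int){display_height}, '
        f'format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink'
    )
```

Passing this string to cv2.VideoCapture along with cv2.CAP_GSTREAMER, as main_loop() does, lets OpenCV read decoded BGR frames from the CSI camera just like a normal webcam.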
But whether we are running on a normal computer or a Jetson Nano, the video_capture object will let us grab frames of video from our computer's camera. So let's start grabbing frames of video:

while True:
    # Grab a single frame of video
    ret, frame = video_capture.read()

    # Resize frame of video to 1/4 size
    small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)

    # Convert the image from BGR color (which OpenCV uses) to RGB color
    rgb_small_frame = small_frame[:, :, ::-1]

Each time we grab a frame of video, we'll also shrink it to 1/4 size. This will make the face recognition process run faster at the expense of only detecting larger faces in the image. But since we are building a doorbell camera that only recognizes people near the camera, that shouldn't be a problem. We also have to deal with the fact that OpenCV pulls images from the camera with each pixel stored as a Blue-Green-Red value instead of the standard order of Red-Green-Blue. Before we can run face recognition on the image, we need to convert the image format. Now we can detect all the faces in the image and convert each face into a face encoding. That only takes two lines of code:

face_locations = face_recognition.face_locations(rgb_small_frame)
face_encodings = face_recognition.face_encodings(
    rgb_small_frame,
    face_locations
)

Next, we'll loop through each detected face and decide if it is someone we have seen in the past or a brand new visitor:

for face_location, face_encoding in zip(face_locations, face_encodings):
    metadata = lookup_known_face(face_encoding)

    if metadata is not None:
        time_at_door = datetime.now() - metadata['first_seen_this_interaction']
        face_label = f"At door {int(time_at_door.total_seconds())}s"
    else:
        face_label = "New visitor!"
        # Grab the image of the face
        top, right, bottom, left = face_location
        face_image = small_frame[top:bottom, left:right]
        face_image = cv2.resize(face_image, (150, 150))

        # Add the new face to our known face data
        register_new_face(face_encoding, face_image)

If we have seen the person before, we'll retrieve the metadata we've stored about their previous visits. If not, we'll add them to our face database and grab the picture of their face from the video image to add to our database. Now that we have found all the people and figured out their identities, we can loop over the detected faces again just to draw boxes around each face and add a label to each face:

for (top, right, bottom, left), face_label in zip(face_locations, face_labels):
    # Scale back up face location
    # since the frame we detected in was 1/4 size
    top *= 4
    right *= 4
    bottom *= 4
    left *= 4

    # Draw a box around the face
    cv2.rectangle(
        frame,
        (left, top), (right, bottom),
        (0, 0, 255), 2
    )

    # Draw a label with a description below the face
    cv2.rectangle(
        frame,
        (left, bottom - 35), (right, bottom),
        (0, 0, 255), cv2.FILLED
    )
    cv2.putText(
        frame,
        face_label,
        (left + 6, bottom - 6),
        cv2.FONT_HERSHEY_DUPLEX, 0.8,
        (255, 255, 255), 1
    )

I also wanted a running list of recent visitors drawn across the top of the screen with the number of times they have visited your house: To draw that, we need to loop over all known faces and see which ones have been in front of the camera recently.
For each recent visitor, we'll draw their face image on the screen and draw a visit count:

number_of_recent_visitors = 0
for metadata in known_face_metadata:
    # If we have seen this person in the last few seconds
    if datetime.now() - metadata["last_seen"] < timedelta(seconds=10):
        # Draw the known face image
        x_position = number_of_recent_visitors * 150
        frame[30:180, x_position:x_position + 150] = metadata["face_image"]
        number_of_recent_visitors += 1

        # Label the image with how many times they have visited
        visits = metadata['seen_count']
        visit_label = f"{visits} visits"
        if visits == 1:
            visit_label = "First visit"
        cv2.putText(
            frame,
            visit_label,
            (x_position + 10, 170),
            cv2.FONT_HERSHEY_DUPLEX, 0.6,
            (255, 255, 255), 1
        )

Finally, we can display the current frame of video on the screen with all of our annotations drawn on top of it:

cv2.imshow('Video', frame)

And to make sure we don't lose data if the program crashes, we'll save our list of known faces to disk every 100 frames:

if len(face_locations) > 0 and number_of_frames_since_save > 100:
    save_known_faces()
    number_of_frames_since_save = 0
else:
    number_of_frames_since_save += 1

And that's it, aside from a line or two of clean-up code to turn off the camera when the program exits. The start-up code for the program is at the very bottom of the program:

if __name__ == "__main__":
    load_known_faces()
    main_loop()

All we are doing is loading the known faces (if any) and then starting the main loop that reads from the camera forever and displays the results on the screen. The whole program is only about 200 lines, but it does something pretty interesting — it detects visitors, identifies them and tracks every single time they have come back to your door. It's a fun demo, but it could also be really creepy if you abuse it. Fun fact: This kind of face tracking code is running inside many street and bus station advertisements to track who is looking at ads and for how long.
That might have sounded far-fetched to you before, but you just built the same thing for $150! Extending the Program This program is an example of how you can use a small amount of Python 3 code running on a $100 Jetson Nano board to build a powerful system. If you wanted to turn this into a real doorbell camera system, you could add the ability for the system to send you a text message using Twilio whenever it detects a new person at the door instead of just showing it on your monitor. Or you might try replacing the simple in-memory face database with a real database. You can also try to morph this program into something entirely different. The pattern of reading a frame of video, looking for something in the image, and then taking an action is the basis of all kinds of computer vision systems. Try changing the code and see what you can come up with! How about making it play your own custom theme music whenever you get home and walk up to your door? You can check out some of the other face_recognition Python examples to see how you might do something like this. Learn More about the Nvidia Jetson Platform If you want to learn more about building stuff with the Nvidia Jetson hardware platform, there's a website called JetsonHacks that publishes tips and tutorials. I recommend checking them out. I've found a few tips there myself. If you want to learn more about building ML and AI systems with Python in general, check out my other articles and my book on my website. If you liked this article, sign up for my Machine Learning is Fun! newsletter to find out when I post something new: You can also follow me on Twitter at @ageitgey, email me directly, or find me on LinkedIn.
https://medium.com/@ageitgey/build-a-hardware-based-face-recognition-system-for-150-with-the-nvidia-jetson-nano-and-python-a25cb8c891fd
Embedded GUI Using Linux Frame Buffer Device with LittlevGL LittlevGL is a graphics library targeting microcontrollers with limited resources. However, it is possible to use it to create embedded GUIs with high-end microprocessors and boards running a Linux operating system. The best-known processor cores are the ARM Cortex-A9 (e.g. NXP i.MX6) and ARM Cortex-A53 (e.g. Raspberry Pi 3). You can create an embedded GUI on these single-board computers by simply using Linux's frame buffer device (typically /dev/fb0). If you don't know LittlevGL yet, learn more about it here: LittlevGL Why use the frame buffer directly? The frame buffer device is a very low-level interface to display something on the screen. For an embedded GUI, there are several reasons to use the frame buffer directly instead of a window manager:

- Simple: just write the pixels to memory
- Fast: no window manager, which means fast boot and less overhead
- Portable: regardless of the distribution, every Linux system has a frame buffer device, so it's compatible with all of them

Maybe you are familiar with the Linux frame buffer device. It is a file usually located at /dev/fb0. This file contains the pixel data of your display. If you write something into the frame buffer file, the changes will be shown on the display. If you are using Linux on your PC, you can try it using a terminal:

- Press Ctrl + Alt + F1 to leave the desktop and change to a simple character terminal
- Type sudo su and type your password
- Stop your display manager (on Ubuntu it's lightdm): service lightdm stop. Important: it will log you out, so all windows will be closed
- Write random data to the frame buffer device: cat /dev/urandom > /dev/fb0. You should see random colored pixels on the whole screen.
- To go back to the normal graphical user interface: service lightdm start

It should work on Linux-based single-board computers too. Get LittlevGL to create an embedded GUI Now you know how to change the pixels on your displays.
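That "just write pixels" idea is easy to script. The sketch below fills a rectangle with a solid color, assuming a 32-bits-per-pixel frame buffer in B, G, R, A byte order (check /sys/class/graphics/fb0/bits_per_pixel on a real system). So it can run anywhere, the demo writes to an ordinary file standing in for /dev/fb0; on real hardware you would open the device file instead:

```python
def fill_rect(fb_path, screen_width, x, y, w, h, color=(255, 0, 0)):
    """Fill a w*h rectangle at (x, y) with a solid (r, g, b) color.

    Assumes 32 bits per pixel stored as B, G, R, A - a common
    frame buffer layout, but not the only one.
    """
    r, g, b = color
    row = bytes((b, g, r, 0)) * w          # one horizontal line of the rectangle
    bytes_per_pixel = 4
    with open(fb_path, "r+b") as fb:
        for line in range(h):
            # Seek to where this row of the rectangle starts on screen
            fb.seek(((y + line) * screen_width + x) * bytes_per_pixel)
            fb.write(row)

# Demo: a tiny 4x4 "screen" in a plain file instead of /dev/fb0
with open("fake_fb", "wb") as f:
    f.write(b"\x00" * (4 * 4 * 4))
fill_rect("fake_fb", screen_width=4, x=1, y=1, w=2, h=2, color=(255, 0, 0))
```

On a real board you would pass "/dev/fb0" and the display's actual width; everything else works the same because the frame buffer is just a file.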
But you still need something which creates GUI elements instead of random pixels. Here comes the Littlev Graphics Library into the picture. This software library is designed to create GUI elements (like labels, buttons, charts, sliders, checkboxes etc.) on an embedded system's display. Check all the widgets here: Graphical object types. The graphics library is written in C, so you can surely adapt it to your project. To make your GUI impressive, opacity, smooth animations, anti-aliasing and shadows can be added. To use LittlevGL you need to clone it from GitHub or get it from the Download page. The following components will be required:

- lvgl: the core of the graphics library
- lv_drivers: contains a Linux frame buffer driver
- lv_examples: optional, to load a demo application to test the GUI project set-up

The simplest case is to test the frame buffer device based GUI on your Linux PC. Later you can apply the same code on an embedded device too.

- Create a new project in your preferred IDE
- Copy the template configuration files next to the lvgl and lv_drivers folders:
  - lvgl/lv_conf_templ.h as lv_conf.h
  - lv_drivers/lv_drv_conf_templ.h as lv_drv_conf.h
- In the config files, remove the first and last #if and #endif to enable their content.
- In lv_drv_conf.h set USE_FBDEV 1
- In lv_conf.h change the color depth: LV_COLOR_DEPTH 32
- Add the project's root folder as an include path

Create an embedded GUI application

- In main.c write the following code to create a hello world label:

#include "lvgl/lvgl.h"
#include "lv_drivers/display/fbdev.h"
#include <unistd.h>

/* Size of LittlevGL's draw buffer; not shown in the original listing.
   128 * 1024 pixels is a common choice - tune it for your display. */
#define DISP_BUF_SIZE (128 * 1024)

int main(void)
{
    /*LittlevGL init*/
    lv_init();

    /*Linux frame buffer device init*/
    fbdev_init();

    /*A small buffer for LittlevGL to draw the screen's content*/
    static lv_color_t buf[DISP_BUF_SIZE];

    /*Initialize a descriptor for the buffer*/
    static lv_disp_buf_t disp_buf;
    lv_disp_buf_init(&disp_buf, buf, NULL, DISP_BUF_SIZE);

    /*Initialize and register a display driver*/
    lv_disp_drv_t disp_drv;
    lv_disp_drv_init(&disp_drv);
    disp_drv.buffer = &disp_buf;
    disp_drv.flush_cb = fbdev_flush;
    lv_disp_drv_register(&disp_drv);

    /*Create a "Hello world!" label*/
    lv_obj_t * label = lv_label_create(lv_scr_act(), NULL);
    lv_label_set_text(label, "Hello world!");
    lv_obj_align(label, NULL, LV_ALIGN_CENTER, 0, 0);

    /*Handle LittlevGL tasks (tickless mode)*/
    while(1) {
        lv_tick_inc(5);
        lv_task_handler();
        usleep(5000);
    }

    return 0;
}

- Compile the code and go back to character terminal mode (Ctrl + Alt + F1 and service lightdm stop)
- Go to the built executable file and type: ./file_name
- Test with a demo application by replacing the Hello world label creation with: demo_create();

Download a ready-to-use project In the lv_linux_frame_buffer repository you'll find an Eclipse CDT project to try out the plain frame buffer based GUI on a Linux PC. There is a Makefile too, to compile the project on your embedded hardware without an IDE. Summary I hope you liked this tutorial and found it useful for your microprocessor-based embedded Linux projects. As you can see, it's super easy to create an embedded GUI with LittlevGL using only a plain Linux frame buffer. To learn more about the graphics library, start by reading the Documentation or check the Embedded GUI building blocks.
If you don't have embedded hardware right now, you can begin GUI development on a PC. If you have questions, use the GitHub issue tracker.
https://blog.littlevgl.com/2018-01-03/linux_fb
Simpler Macros in Twig Templates Macros are one of the most important features of the Twig template language for avoiding repetitive content. In Twig 2.11, the usage of macros was simplified and other features were added to help you work with macros. Automatic macro import Contributed by Fabien Potencier in #3012. Macros are similar to PHP functions because you can pass arguments to them, and the content generated inside the macro is returned and included in the place where the macro is called. On symfony.com, for example, we use macros to display the "contributor box" in several places to highlight our amazing Symfony code and docs contributors: Before calling a macro in a template you must import it, even if the macro is defined in the same template. This behavior always felt confusing to some people and made using macros a bit annoying. Starting from Twig 2.11, we've fixed that, and macros defined in the same template are imported automatically under the special variable _self: This automatic import also works for macros themselves, so you can call a macro inside another macro of the same template without importing it explicitly (this also works for any macro imported globally in the template): Checking for macro existence Contributed by Fabien Potencier in #3014. Another new feature introduced in Twig 2.11 is support for checking the existence of macros before calling them, thanks to the is defined test. It works both for macros imported explicitly and for auto-imported macros: Macros Scoping Contributed by Fabien Potencier in #3009. The scoping rules that define which macros are available inside each element of each template have been clearly defined as of Twig 2.11, as follows:

- Imported macros are available in all blocks and other macros defined in the current template, but they are not available in included templates or child templates (you need to explicitly re-import macros in each template).
- Macros imported inside a {% block %} tag are only defined inside that block and override other macros with the same name imported in the template. The same goes for macros imported explicitly inside a {% macro %} tag.
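The inline code samples from the original post didn't survive extraction. A small sketch of the behaviors described above (the macro name alert is a hypothetical example, not from the post) might look like this in Twig 2.11:

```twig
{# No {% import %} needed for macros defined in this same template #}
{% macro alert(message) %}
    <div class="alert">{{ message }}</div>
{% endmacro %}

{{ _self.alert('Macros in the same template are available via _self') }}

{# The new "is defined" test works on auto-imported macros too #}
{% if _self.alert is defined %}
    {{ _self.alert('Only called if the macro exists') }}
{% endif %}
```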
https://symfony.com/blog/simpler-macros-in-twig-templates?utm_source=Symfony%20Blog%20Feed&utm_medium=feed
Ranter - 📌 cool, I do not really use vs code but I will check them out. - 📌 - h4xx3r16602y🚩 - Gitlens is the shit. I mostly just use web snippets and intellisense. Idk beautify and something to correctly paste from the clipboard. Pretty boring. 🤤 - I use vim - - - Cybork10242y Auto-Close Tag, Auto-Rename Tag, ES Lint, GitLens, highlight-matching-tag, import cost, indent-rainbow, Prettier, Rainbow Brackets, React snippets, Settings Sync, stylelint, Stylesheet Formatter, Terminal, theShukran React/React Native Utils, Waka Time. I've downloaded some of yours to give them a try - DJLad971192y Thanks for introducing me to GitLens, it's awesome! - rdunlap412y 📌 - cfood62702y 📌 - sallai2152y 📌 - Cybork10242y @Devnergy If you're using multiple computers to develop, I'd advise Settings Sync. Waka Time is also a good tracker of whatever you're doing, repos you're working on and languages you're using, and it displays everything nicely in data visualizations. Besides these and GitLens, which you already work with, I recommend 'import cost' which I also found out about recently - @Cybork thanks mate. I manually add my vscode settings in my macbook pro laptop and windows pc at work. It will definitely help me. Cheers! - - puttingUp(object, tool){ return "here to put up a " + object + " and" + tool + " shit down. Thanks <3"; }; puttingUp(⛺, ✍️); Share your VS Code installed extensions here. Mine is: Alignment, Better Comments, change-case, Colonize, CSS Peek, DotENV, File Utils, GitLens (my favorite!), Gulp Snippets, JS-CSS-HTML Formatter, Laravel 5 Snippets, Laravel Blade Snippets, Material Icon Theme, npm Intellisense, Numbered Bookmarks, Path Intellisense, PHP Debug, PHP DocBlocker, PHP Intelephense, PHP IntelliSense, Prettify JSON, Quokka.js, snippet-creator, Vetur. Feels like there are redundant extensions here that I need to uninstall. Happy Friday and Cheers! Excited for Infinity War movie!
😎
https://devrant.com/rants/1352236/share-your-vs-code-installed-extensions-here-mine-is-alignment-better-comments-c
Everyone wants to get their apps in front of the largest possible audience, and as mobile app stores are global, your app should be too. That makes localization into a vital marketing expense. New development tools make it easy to support multiple languages and cultures, and you’ll find that the hardest part is the language translation itself.. In this article, I’m going to build the same application on Android, iOS, and Windows Phone 8. To do this, I’ll use Xamarin’s Android and iOS tools (formerly known as MonoDroid and MonoTouch). Xamarin’s iOS and Android products use the native localization mechanisms on each platform. So although the iOS and Android information in this article has a definite .NET flavor, the core concepts also apply if you were doing Objective-C and Java development on each platform. Know Your Localization Problem What are the localization needs for your app? Have you been given any localization requirements? If you have, then you already have your marching orders. If you haven’t been given any requirements for localization, think about your target markets. There are global app stores and you don’t want to limit the marketability of your app if you don’t have to. How many languages do you need to support? If your app targets Canadian government employees, you need both English and French. If your app displays public transit information for cities in California, you want English and Spanish at minimum, Chinese and other languages if you can get translation resources. Know who your target audience is for your app. Don’t limit your app sales to a single market. The Basic Terminology There are some common terms used when you talk about localization. I’ll cover the basic ones here. Language: This is the language chosen by the user. The same language in different countries can have different spelling and grammar rules. Locale: The culture for the user. This is the language matched to the country. 
Locale can be used to differentiate between different dialects of a language. For example American English (en-US) and UK English (en-UK) have different spellings for the same word (such as color and colour) and also different terms for the same items (like hood versus bonnet). It also includes how dates, numbers, and currency are displayed. The locale is usually defined with a lower-case two-character language code and an uppercase two-character county code, separated by a hyphen. The language codes are defined by the ISO 639 standard, using the two-letter codes defined as ISO 639-1. The country codes are defined by the ISO 3166 standard, with ISO 3166-1 alpha-2 defining the two-letter country codes. A locale can be defined by only the language code, but it is more accurate to use language and country to account for regional differences. For example fr-CA represents the French language as it is used in Canada. This can be different than the French used in France with the locale code fr-FR. It’s mostly the same language, but with minor spelling differences and very different idioms. The language resource files used by the applications are selected based on the locale. This is handled by the runtime code; you don’t have to set this manually. Culture/UICulture: The .NET representation of a locale. For the most part, you don’t need to think about this part. The language resource should be selected by the app based on the locale. Right to Left (RTL) Support Do you need to support right-to-left (RTL) languages like Arabic, Hebrew, or Persian? This impacts how you layout out controls on the screen. The latest versions of mobile platforms have good support for RTL layout. If you’re writing an Android app, consider targeting Android 4.2 or later if you need RTL support. Full native support for RTL layout was added to 4.2. You can do RTL in older versions, but it’s much easier in 4.2. 
To add RTL support in Android 4.2, you’ll need to do the following: - Add the android:supportsRtl attribute to the <application> element in your manifest file and set it to true. This enables the RTL support in Android 4.2 (API level 17) and is ignored in older versions. - Convert “left” and “right” layout properties to “start” and “end.” For example, android:paddingLeft becomes android:paddingStart. RTL layouts do not have the same level of support in iOS. The UILabel control displays the text in RTL if the text starts with the Unicode character 0x200F. Unicode has two characters that are invisible and set the direction of the text. The 0x200F character indicates RTL and 0x200E indicates LTR. If you’re writing an iPhone app, the narrow width of the screen limits you to one text item per row. If you’re displaying multiple items horizontally, you need to detect the language and arrange the controls for the RTL layout. Windows Phone uses a property named FlowDirection to set RTL and LTR. This is set by the current Culture of the phone; you don’t need to do anything to support it. You can override the direction by adding a FlowDirection attribute to a control and setting it to LeftToRight or RightToLeft. Building the Localized Cross Platform The sample app is a basic note-taking app. It doesn’t do a lot, but it does enough to show language and culture localization on each platform. A complete solution containing projects for Android, iOS, and Windows Phone 8 can be downloaded from. Windows Phone 8 First I’m building the app on Windows Phone first because Microsoft has the best tools for writing and testing code; when you add the Xamarin tools, you can stay with a single language. Plus you gain a level of code reuse across the platforms. For localization, you have a secret weapon called the Multilingual App Toolkit. The Multilingual App Toolkit is a Visual Studio extension that handles the grunt work of adding language resource files. 
It can even use Microsoft Translator to machine-translate some of the common language. You'll also use the T4 feature in Visual Studio to generate the Android and iOS string resource files from the Windows string resource files.

Creating the Windows Phone 8 App

I'll skip how to create a Windows Phone app; you can look at the sample solution or create an app of your own. What you need to do is make sure that all of the text strings come from a resource file. The standard templates for a new Windows Phone app create the initial AppResources.resx file and wire it up for declarative binding in the XAML. Instead of having embedded text strings like this:

<TextBlock Text="Hello World" />

you have something like this:

<TextBlock Text="{Binding Path=LocalizedResources.HelloWorld, Source={StaticResource LocalizedStrings}}"/>

If you need to set any text properties in the code-behind file, you use the .NET resource syntax:

appBarButton.Text = AppResources.Save;

Once you have the Windows Phone app working, with strings properly placed in the AppResources file, you'll add some languages and do a rough translation of the text.

Create the Android App

A sample Xamarin.Android app is included with this project, and you can use that project or create one of your own. The code for using string resources is nearly identical between Xamarin.Android and Google's Java-based development toolkit. The design of the app should be the Android equivalent of the Windows Phone app; the Android layout files are a rough approximation of the Windows Phone XAML views. Create the app, but don't worry about localizing the string resources yet. You'll want the resources that are generated from the Windows project.
The reason for creating the project now is to create the project resource folder, so that the tools have a destination folder for their output.

Create the iOS App

As with the Android app, a sample iPhone app created with Xamarin.iOS is included with this project. If you use Objective-C, the resource files are the same but the source code is different. The UI for this app is code-based; it doesn't use the .xib files generated by the Xcode Interface Builder tool. It takes a little more work to localize an iOS app than it does for Windows or Android, but it's pretty straightforward and you'll use a string extension method to help out. As with the Android app, you don't have to worry about localization yet. You just want to get the folders set up for the automated transforms.

Install the Multilingual Toolkit

Now it's time to install the Multilingual App Toolkit for Visual Studio 2012. Follow these steps to get it installed and usable with the sample project:

- Make sure that Visual Studio has all of the latest service packs and critical updates installed.
- Install the Multilingual App Toolkit from the language-appropriate download link.
- Restart Visual Studio.
- Open the solution containing the Windows Phone app and select the Windows Phone app project.
- From the TOOLS menu, select "Enable Multilingual App Toolkit." This enables the Multilingual App Toolkit for the project and adds a pseudo language (that you can ignore).
- From the PROJECT menu, select "Add Translation Languages…" This invokes the Translation Languages dialog box, as seen in Figure 1.
- Select French and Spanish and press the OK button. This adds the files AppResources.es.xlf and AppResources.fr.xlf to the project in the Resources folder.

The .xlf files are XLIFF (XML Localization Interchange File Format) files, a standard XML format for storing localizable data that can be shared with external tools and third-party services.
Double-click the AppResources.fr.xlf file. This opens the file in the Multilingual Editor, as displayed in Figure 2. Select all of the strings with the state of New (red icon) and click the Translate button. This does a machine translation of each string and produces a rough approximation of the translated text. You get a translation, but one made without any context; a language expert should validate the translation to make sure it's accurate. The Multilingual Editor sets the state of machine-translated text from "New" to "Needs Review," which makes it easier for the language expert to know which items need to be reviewed.

An alternative way of using auto-translate is to right-click the .xlf files that were just added and select "Generate Machine Translations." The Multilingual App Toolkit uses the Microsoft Translator service to translate all of the new string resources in the files. Rebuild the project to generate the localized .resx files from the .xlf files.

To test the localized string resources, you need to deploy the app to the Windows Phone Emulator and change the country and language. Note that Windows Phone 8 does not differentiate between US Spanish and Spain's Spanish; that is why the toolkit added the generic "es" locale, and the phone uses it rather than "es-US" when you select Spanish as the language and US as the country.

It's a little cumbersome to test localization on Windows Phone because you have to run the Settings app on the emulator (or device) and change the language (and region), which requires a reboot. There is a shortcut that requires a couple of lines of code: instead of having the app pick up the current language and region from the operating system, you can force a specific language. In the constructor method of the main page, add two lines to set CurrentCulture and CurrentUICulture to the locale you want to test.
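Because the XLIFF files the toolkit produces are plain XML, you can also script simple checks on them outside the editor. The following Python sketch is my own illustration (not part of the article's project): the element names follow the XLIFF 1.2 schema, and the sample document is hand-made. It groups translation units by their review state, the same "New" versus "Needs Review" distinction the Multilingual Editor shows:

```python
import xml.etree.ElementTree as ET

# A hand-made XLIFF 1.2 fragment; real files from the toolkit are larger.
XLIFF = """<xliff version="1.2" xmlns="urn:oasis:names:tc:xliff:document:1.2">
  <file source-language="en-US" target-language="fr-FR" datatype="xml" original="AppResources.resx">
    <body>
      <trans-unit id="Save">
        <source>Save</source>
        <target state="needs-review-translation">Enregistrer</target>
      </trans-unit>
      <trans-unit id="ApplicationTitle">
        <source>Notes Demo</source>
        <target state="new">Notes Demo</target>
      </trans-unit>
    </body>
  </file>
</xliff>"""

NS = {"x": "urn:oasis:names:tc:xliff:document:1.2"}

def units_by_state(xliff_text):
    """Map each target state ('new', 'needs-review-translation', ...) to its unit ids."""
    root = ET.fromstring(xliff_text)
    states = {}
    for unit in root.iterfind(".//x:trans-unit", NS):
        target = unit.find("x:target", NS)
        state = target.get("state", "unknown") if target is not None else "missing"
        states.setdefault(state, []).append(unit.get("id"))
    return states

print(units_by_state(XLIFF))
```

A check like this is handy in a build script for flagging resources that were never reviewed by a language expert.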
That code looks something like this:

public MainPage()
{
    InitializeComponent();

    // Force the app to use a specific culture
    Thread.CurrentThread.CurrentCulture = new CultureInfo("es-US");
    Thread.CurrentThread.CurrentUICulture = Thread.CurrentThread.CurrentCulture;

    DataContext = App.ViewModel;
    BuildLocalizedApplicationBar();
}

This can save a lot of time during debugging, as opposed to changing the settings in the emulator each time (that change reboots the emulator). Just remember to remove or comment out this code before submitting the app to the app store; otherwise, you'll have locked every user into one language. One way to guard against that is to put the code inside a #if DEBUG/#endif block, so that it isn't compiled into release builds.

Using the sample Windows Phone project, the main page in English (US) appears in Figure 3. Changing the language to Spanish and the region to Spain generates the Edit Notes page, as in Figure 4.

You can generate the Android and iOS string resource files from the Windows .resx files, but first you need to add the Android and iOS versions of this app to the solution.

Generate the Android and iOS Resource Files from the Windows Phone App

With the Android and iOS apps using string resource files, it's time to transform the Windows string resource files into string resources for the other platforms.

Install the T4 Tools

A good programmer uses a tool for the grunt work. For this project, the tool is the T4 feature of Visual Studio, with a free extension from the Visual Studio Gallery called "T4 Toolbox." T4, which stands for Text Template Transformation Toolkit, is a template-based text generation framework that comes with Visual Studio. It's used by the Entity Framework to generate entities from database schema and by ASP.NET MVC to generate views and controllers.
You're going to generate language resource files from the language .resx files. Among other things, T4 Toolbox makes it easier to generate multiple output files from a single source file. Install T4 Toolbox from within Visual Studio via the Extension Manager: from the main menu, select TOOLS, then Extensions and Updates, select the Visual Studio Gallery entry for T4 Toolbox, and click the Download button, as shown in Figure 5.

Add the T4 Scripts

I wrote a T4 template that has the classes for transforming the Windows string resource files to the formats needed for Android and for iOS. This T4 file is named Resx2AndroidTemplate.tt and is included in Listing 1. This script defines two classes, Resx2AndroidTemplate and Resx2iOSTemplate. The bulk of the code is in Resx2AndroidTemplate; it loads a specified .resx file and creates a Dictionary<string, string> from the string resources in the file. I've added properties that let you specify the output folder. With multiple platforms as separate projects within a single solution, being able to specify the paths allows the T4 code to update the .csproj project files correctly.

The TransformText() method takes the Dictionary of resource values and renders an XML file in the format of the Android string resource file. The following is an excerpt from the AppResources.resx resource file for the Windows Phone app:

<?xml version="1.0" encoding="utf-8"?>
<root>
  <data name="ApplicationTitle" xml:space="preserve">
    <value>Notes Demo</value>
  </data>
  <data name="Save" xml:space="preserve">
    <value>Save</value>
  </data>
</root>

The TransformText() method renders the following for Android:

<?xml version="1.0" encoding="utf-8"?>
<resources>
  <string name="ApplicationTitle">Notes Demo</string>
  <string name="Save">Save</string>
</resources>

The Android and Windows string resources have a very similar format, so it's easy to generate the Android files from the Windows ones. For iOS, the Resx2iOSTemplate class is based on the Resx2AndroidTemplate class.
The TransformText() method renders a string resource file in the format that the iOS compiler recognizes. Based on the same AppResources.resx snippet, the rendered output for iOS looks like this:

"ApplicationTitle" = "Notes Demo";
"Save" = "Save";

Another T4 script, Res2Others.tt in Listing 2, collects the resource files and runs the transformations on them. It executes in the Resources folder of the Windows Phone project and uses wild-card matching on the file name to process the .resx file for each language. This script has hard-coded paths to the resource folders for the Android and iOS apps, based on each app being a project in the same solution. This is easy to change to meet your needs: if you aren't using the Xamarin tools, or the other projects are in a different solution, create the folders outside the solution and let the T4 transform output to those folders. To run the transformation, right-click Res2Others.tt and select Run Custom Tool. The T4 engine runs the script and generates the transformed resource files.

The code in Res2Others generates the platform folder names based on the locale. Android uses specially named folders in the project's Resources folder. The folder values is the default folder and usually contains the English (en) string resources. If only the language is localized, the folder is named values-xx, where "xx" is the two-character language code. If a country code is also used, the name is values-xx-rYY, with "xx" as the language, "r" meaning region, and "YY" as the country or region code. The "r" is an Android quirk; just having the country/region code should be enough to indicate a region, but that is what Android requires, so it needs to include the "r." When Android needs to locate a string resource, it goes from the most specific resource to the most generic.
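The T4 templates themselves are in the listings, but the transformation logic is easy to express in any language. Here is a compact Python sketch of the same idea — parse the .resx data elements, then render an Android strings.xml and an iOS .strings file. This is my own illustration, not the article's T4 code, and the helper names are mine:

```python
import xml.etree.ElementTree as ET
from xml.sax.saxutils import escape

RESX = """<?xml version="1.0" encoding="utf-8"?>
<root>
  <data name="ApplicationTitle"><value>Notes Demo</value></data>
  <data name="Save"><value>Save</value></data>
</root>"""

def parse_resx(text):
    """Return the string resources in a .resx document as a name -> value dict."""
    root = ET.fromstring(text)
    return {d.get("name"): d.findtext("value", "") for d in root.iter("data")}

def to_android(strings):
    """Render an Android strings.xml resource file."""
    items = "\n".join(
        f'  <string name="{name}">{escape(value)}</string>'
        for name, value in strings.items())
    return f'<?xml version="1.0" encoding="utf-8"?>\n<resources>\n{items}\n</resources>'

def to_ios(strings):
    """Render an iOS Localizable.strings file."""
    return "\n".join(f'"{name}" = "{value}";' for name, value in strings.items())

strings = parse_resx(RESX)
print(to_android(strings))
print(to_ios(strings))
```

Running the same transform per language file, as Res2Others.tt does, is just a loop over the .resx files in the Resources folder.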
For example, if you have Spanish language support and include both a generic Spanish resource and a US Spanish resource, Android reads the resources in the following order:

- values-es-rUS
- values-es
- values

The reason for this is so that you don't need to translate every string for every language. If 90% of your app's terms have the same translation for Spain and for the US, you put only the country-specific terms in the values-es-rUS and values-es-rES folders.

In iOS, the folders are located off the root folder of the project and are named xx.lproj, where "xx" is the language. Apple does not support region-specific resource files for iOS projects. The first time the iOS resource files are generated, you'll need to set their Build Action to Content under the file properties tab.

Localizing the Android App

After generating the resource files for Android from the Windows Phone project, rebuild the Android project. This generates the symbols for the resource strings so that you can use them in the layout designer and in the code. In the layout files, it's pretty easy: you set the android:text property to @string/ResourceStringName. It doesn't matter how many string resource files you have or how they are named; Android references them all as @string. So a TextView control looks like this:

<TextView android:text="@string/NoteTitle" />

You'll be able to see the value of the NoteTitle resource at design time, and you can select the displayed language in the designer to see the other languages. That's a handy feature that would also be useful in the Windows Phone XAML designer. If you change the language and country in the emulator, remember to restart the app; otherwise the app may not be completely localized. To reference a resource string in code, use the GetString() method, in both C# (Xamarin) and Java.
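The lookup rule Android applies can be sketched in a few lines. This Python fragment is an illustration of the fallback order, not Android's actual implementation; the resource tree and string names are hypothetical (folder names are lower-case on disk):

```python
def android_search_order(language, region=None):
    """Folder search order, most specific first: values-xx-rYY, values-xx, values."""
    order = []
    if region:
        order.append(f"values-{language}-r{region}")
    order.append(f"values-{language}")
    order.append("values")
    return order

def resolve(resources, name, language, region=None):
    """Return the value of `name` from the most specific folder that defines it."""
    for folder in android_search_order(language, region):
        if name in resources.get(folder, {}):
            return resources[folder][name]
    raise KeyError(name)

# Hypothetical resource tree: only the region-specific term differs for es-US.
resources = {
    "values":        {"Save": "Save", "NoteTitle": "Note title"},
    "values-es":     {"Save": "Guardar", "NoteTitle": "Título de la nota"},
    "values-es-rUS": {"NoteTitle": "Título de nota"},
}
print(resolve(resources, "NoteTitle", "es", "US"))  # most specific folder wins
print(resolve(resources, "Save", "es", "US"))       # falls back to values-es
```

Note how "Save" never needs a values-es-rUS entry: the shared Spanish translation is found one step down the chain, which is exactly why you only translate the country-specific terms.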
You pass in the resource ID for the string, which is generated when you build the app. In C#, it looks like this:

SomeButton.Text = Resources.GetString(Resource.String.SomeText);

And in Java, it's just slightly different:

someButton.setText(getResources().getString(R.string.SomeText));

In addition to localizing the text in a layout file, you can have different layout files for different locales. As with string resource files, you can add localized layout folders by following the same naming conventions that were used for the string resource files.

Localizing the iOS App

With iOS, resources get compiled into what Apple calls bundles. To get a string resource, use the LocalizedString() method of the app's main bundle. It looks something like this:

myLabel.Text = NSBundle.MainBundle.LocalizedString("Save", "", "");

This returns the resource value for "Save". If the resource does not exist, the key "Save" itself is returned. That's a lot of code for every string resource assignment. With C#, you can make it much simpler with an extension method. Add this extension to your iOS project:

using System;
using MonoTouch.Foundation;

namespace notes.iPhone
{
    public static class LocalizationExtensions
    {
        public static string t(this string translate)
        {
            return NSBundle.MainBundle.LocalizedString(translate, "", "");
        }
    }
}

With that extension, the code to set the Text property of the UILabel control becomes:

myLabel.Text = "Save".t();

That extension was posted by Thomas Rosenstein on the Stack Overflow site. It is also very handy for catching misspelled string resource keys. Unlike Android and Windows, iOS does not generate resource IDs; if you misspell the key string, you'll get the misspelled key back as the translated value. The second parameter of the LocalizedString method is the optional default value; if you do not pass in a value, you get the key string returned.
If you set that default to something that should never appear in the app (like "!!TILT!!!"), it makes it easier to catch misspelled or missing resource keys.

Other Concerns

There are a few additional considerations to keep in mind.

Dates, Numbers, and Currency

There's some additional work to do besides the language translation. Part of the localization process is making sure that dates, times, and numbers are displayed correctly. Always use the .ToString() methods to display the values; if you use the .ToShortDateString() method on a DateTime variable, you'll always get the right text for the locale.

Currency is a different concern. You can't automatically convert a currency value to the current locale; how you handle currency depends entirely on the needs of the application. Be careful how you localize the currency symbol: €10.00 does not have the same value as $10.00.

Gender

While translated terminology in mobile apps is usually terse, gender is something you need to watch out for. In English, the definite article "the" is gender neutral. Many languages, such as French and Spanish, assign a gender to each noun, and their equivalent of "the" depends on the gender of that noun. For example, take the following two sentence fragments as they appear in English:

- Press the button.
- Edit the photo.

In French, they could be translated as:

- Pressez le bouton
- Modifier la photo

For these examples, you translate the entire sentence. If you're creating the sentence at runtime and the noun is selected by the user, you need a way of determining the gender of the noun.

Plural Forms

Handling plural forms can be tricky, as the rules vary by language. The most common pattern in English is the singular/plural rule, represented like this:

- You have 1 new message.
- You have 4 new messages.

In your code, you have two resource strings with a placeholder for the quantity, and the string is selected by the quantity.
Although many languages follow the singular/plural rule, it's not universal. Asian languages such as Chinese, Japanese, Korean, and Vietnamese use only a single form. Polish is commonly described with three forms: one for 1, one for numbers ending in 2-4 (except 12-14), and one for everything else. To get an idea of the number of plural forms, see the list published on the Mozilla Developer site.

If you can avoid plural forms, your code will be simpler. Instead of using singular and plural strings for the number of emails, put the number at the end, like this:

- New messages: 1

That works for any quantity, including 0. Another reason to use this method is that it uses less screen real estate, which is always at a premium on a mobile phone.

Localization is More than Text

If your app has images, consider whether they need localized versions. If you use an icon or an image that has context in your culture, use a generic version or culture-specific versions. The stop sign is often used as an icon for a button that stops a process or an action. Many countries use a variation of the word "Stop" in a red octagon shape, as shown in Figure 6; other countries use a local word and can even change the shape. Japan uses a triangle shape and the Japanese characters for "Stop" on its stop sign (Figure 7). When in doubt, use culture-specific icons.

Avoid using the flag of a country as a symbol to represent the language being used. This can be viewed as offensive in countries with populations that speak different languages; French-speaking Canadians are particularly sensitive to this. It is less of an issue with mobile apps than with browser apps, but I have seen many sites that use a flag as a way of displaying or changing the rendering language.

Use the Right Resources for the Text Translation

Although I used machine translation for this article, I wouldn't put out an app without having a language expert review the translations.
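A quantity-to-form selector makes these rules concrete. The sketch below is my own illustration in Python, not from the article; the Polish rule follows the commonly cited simplified three-form description, and the noun forms are examples:

```python
def plural_form_en(n):
    """English: form 0 when n == 1, form 1 otherwise."""
    return 0 if n == 1 else 1

def plural_form_pl(n):
    """Polish (simplified): form 0 for 1; form 1 when n ends in 2-4
    but not in 12-14; form 2 for everything else."""
    if n == 1:
        return 0
    if n % 10 in (2, 3, 4) and n % 100 not in (12, 13, 14):
        return 1
    return 2

def select(n, forms, rule):
    """Pick the right string variant for quantity n."""
    return forms[rule(n)]

# Illustrative Polish noun forms for "file": plik / pliki / plików.
PL_FILE = ["plik", "pliki", "plików"]

for n in (1, 3, 5, 22):
    print(n, select(n, PL_FILE, plural_form_pl))
```

The point of the sketch is that the rule function, not the resource file, varies per language — which is exactly why sidestepping plurals with "New messages: 1" keeps the code simpler.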
Machine translation gives you a rough approximation and has value for checking layout and making sure that the text has been moved to a resource file. If you're fluent in multiple languages, you are the first source for translating those languages. Remember, localization is a marketing expense; if you know a language well enough to translate it, that's a cost savings for you. If you really want to reach a global audience, contract the translation work to a company that specializes in app localization. They can also translate the content that you submit to the app stores; Apple maintains a good list of resources for this work. Make sure the localization vendor can work with the resource file formats that you're using, or use the Multilingual App Toolkit with its industry-standard XLIFF files. If you use third-party components in your code, make sure that they can be localized and that they respect the locale settings.

Summary

When designing a mobile application, you want to localize it so that it reaches more users than an app built just for the default language. If you plan to support multiple platforms, you can get away with having the text resources translated only once. With the tools available for Visual Studio and some custom T4 scripting, you can translate once and get resource files for each platform.
https://www.codemag.com/Article/1401081
How to install PyQt 4.3 and Python 2.5 on Windows

Update 1/10/2008: PyQt has made the install process a lot simpler by bundling everything you need in one installer, including the Qt 4.3 open source edition. Now all you need to do is install Python and the PyQt bundle. Immediately following are the updated steps; below that are the old instructions.

Update 7/1/2008: Updated for PyQt 4.4.2

NEW INSTRUCTIONS

Here are the steps to install and create a simple "Hello World" GUI application using PyQt 4.4 and Python 2.5 on Windows.

Install Python 2.5
- Go to the Python download page and click on "Python 2.5.x Windows installer"
- Save and run the Windows installer
- Go through the steps and accept the defaults.

Install the PyQt 4.4 bundle (including Qt 4.4)
- Go to the Riverbank download page and select the "PyQt-Py2.5-gpl-4.4.2-1.exe" link.
- Save and run the file.
- Go through the steps and accept the defaults.

A window with a single push button should pop up. For more examples, go to "Start" -> "All Programs" -> "PyQt GPL v4.4.2 for Python v2.5" -> "Examples" -> "PyQt Examples Source" (for a default installation, this is also located at C:\Python25\PyQt4\examples). To start, look in the "tutorial" directory.

OLD INSTRUCTIONS

Here are the steps to install and create a simple "Hello World" GUI application using PyQt 4.1.1, Python 2.5, and Qt 4.2.2 Open Source edition (GPL) on Windows XP with the MinGW compiler.

Install Python 2.5
- Go to the Python download page and click on "Python 2.5 Windows installer"
- Save and run the Windows installer
- Go through the steps and accept the defaults.

Install MinGW
- Go to the MinGW download page
- Download the following "bin" files from the "Current" section:
  - gcc-core-3.4.2-20040916-1.tar.gz
  - gcc-g++-3.4.2-20040916-1.tar.gz
  - mingw-runtime-3.9.tar.gz
  - w32api-3.6.tar.gz
- Extract all the files to "c:\mingw"

Install Qt 4.2.2 Open Source edition
- Go to the Open Source download page. Note there is also an Evaluation version; that is *not* the one you want.
- Under the "Download" heading, select the "" link.
- Go through the steps and accept the defaults.
- When you get to the MinGW page, leave the "Download and install minimal MinGW installation" box unchecked and make sure the location of the MinGW installation is set to "c:\mingw". Click "Install".
- You will get an error message saying that the installer could not find a valid "w32api.h" file. You can install the 3.2 version from the MinGW site, but the 3.6 version works. Click "Yes" to continue, then click "Finish" to finish the installation.

Install PyQt 4.1.1
- Go to the Riverbank download page and select the "PyQt-gpl-4.1.1-Py2.5-Qt4.2.2.exe" link.
- Save and run the file.
- Go through the steps and accept the defaults.

Check your Environment Variables
- Right-click on "My Computer" and select "Properties"
- Click the "Advanced" tab
- Click "Environment Variables"
- The following variables should be set:
  - user variable QTDIR: "c:\qt\4.2.2"
  - user variable QMAKESPEC: "win32-g++"
  - system variable PATH: include "C:\Qt\4.2.2\bin;C:\Python25\Scripts;C:\Python25;C:\Python25\DLLs;"

Comments

perfect instruction. THX

The example line:
from Qt import *
should be:
from PyQt4.Qt import *

I didn't need to manually install MinGW. The qt-win-opensource-4.2.2-mingw.exe "Download and install minimal MinGW" option worked just fine. This seems simpler to me. Thank you for this post/guide.

I'm glad the post was helpful. For me, "from PyQt4.Qt import *" didn't work; I'm not sure what the difference is. Also, it's great if the "Download and install minimal MinGW" option works for you. I think maybe it didn't work for me because I am behind a proxy.

Thanks for these instructions; I had some trouble getting the environment set up correctly. Also, this works with Qt 4.2.3 as well.

Tarjei, glad it works for Qt 4.2.3 as well. What problems did you have setting up the environment? -sofeng

I updated the example code to use the new module import line:
from PyQt4.QtGui import *
instead of the old line:
from Qt import *
Sorry it took so long to fix this.
Looks like the installation process has become much easier, because the PyQt package gives you everything you need to start developing once Python is installed.

Anonymous: I noticed this also. I'll update the post. -sofeng

Hi, your instructions were a life saver. Did you know you're the only one on the net with simple instructions? Everywhere else it's all "compile this, make install that, etc." I think it's because Windows is neglected... OK, now for my request. I ran the hello world app at the end of your instructions, but I don't know what to do from there. Can you add information like:
1) When running the hello world app, a black Python command prompt always comes up as a second window. I don't want that showing up in my apps; how do I get around that?
2) I noticed that eric4 is included in the Riverbank pack. But where do I go to use it? All I see in the directory are some eric .bat files, but none open the IDE.
3) How do I package my applications up for distribution to end users? How do I define the little icon that shows up in the top left corner of the app?
Thanks again for the awesome instructions.

Anonymous: Thanks, I'm glad the instructions were helpful. Here are my responses to your questions. To run a Python program without the console window popping up, rename your .py file with a .pyw extension. You can also use pythonw.exe instead of python.exe to run your program. I got an error message when trying to run the eric IDE also; maybe you can try installing the latest version of eric4. Personally, I use py2exe to package my applications. It does a good job, and I think there are other alternatives as well. To change the icon in your application window, see the documentation for QWidget's windowIcon property.
For example, to add an icon to the hello world example:

import sys
from PyQt4.QtGui import *

app = QApplication(sys.argv)
button = QPushButton("Hello World", None)
icon = QIcon("c:/path/to/myicon.png")
button.setWindowIcon(icon)
button.show()
app.exec_()

Hi, it's me again from the previous post. After some research, I found why eric is broken: a couple of bugs in the installer from Riverbank.
1) It forces the docs to be installed in C:\Program Files\PyQt4\ even though the default installation directory is C:\Python25. I am unsure why it would do this, but it does. So all the tutorials are broken and none work from the start menu. I still can't get the tutorials to work even after copying them to the Python25 directory.
2) The Riverbank installer does not account for eric4 loading unless you install Python in the default directory (C:\Python25), and then PyQt4 in the default directory (the Python25 directory). Even if you are very careful, it still breaks. So you might want to add to your instructions how important it is to use the default directory. Once I uninstalled it all and reinstalled with the default directory, eric4 works when started from the start menu.
I don't really know who to report this bug to; I emailed Riverbank about it. That is about all I have right now. Thanks again for the fast response, the tip to use .pyw, and the icon trick. I appreciate it. Also, do you have any pointers to people using PyQt GPL on Windows to build cool utilities? I just can't seem to find many people doing it on Windows (mostly they are on Linux).

What about SIP? Does this package have that covered?

My bad, I still did it the very hard way... SIP dev tools are included!

Thank you for this post. It saved a lot of time.

That was easy. Thank you!

I have compiled a Python (PyQt) script using Cython (pure Python mode) with gcc. Importing the module doesn't show any error, but executing it shows an unhandled Win32 exception.
Here are the details:
PyQt 4.6.1
Python 2.6.2
Win XP 32
Cython-0.11.2.win32-py2.6
qt-sdk-win-opensource-2009.03.1.exe
I'm not so good with C/C++ stuff. Any pointers will be greatly appreciated. Thanks. Prashant

Hi, I have Python 2.6 installed. I downloaded PyQt-Py2.6-gpl-4.7.3-2 and installed it. When I execute the sample program I get this error message:

Traceback (most recent call last):
  File "C:/Python26/Code/sample_pyqt.pyw", line 4, in <module>
    from PyQt4.QtGui import *
ImportError: DLL load failed: The specified module could not be found.

The QtGui4 DLL is present in the PyQt4 bin directory. I don't know the reason for this error. Any idea why this is happening? Any help would be highly appreciated. Thanks.

I'm also getting the problem mentioned in comment #18, with Python 2.6.5 and PyQt 4.7.3-2 on Windows 7 64-bit. This problem was also brought up in the Riverbank PyQt mailing list, but with no resolution.

I'm using Windows 7 64-bit, so I downloaded and installed Python 2.6.4 64-bit, but when I installed PyQt I could not import it from the interactive shell. I solved this problem via the following steps:
1) I removed Python 2.6.4 64-bit.
2) I installed Python 2.6.4 32-bit.
Up to now I'm using PyQt and Python properly :) Thank you so much for your explanations.

Hi, I am a newcomer to the PyQt world; I just installed and ran your "Hello World" sample. Really a nice example, and it works like a charm. Thanks, my dear author, for this good contribution. Thankfully, Anes P.A

Hi, I'd love to use PyQt4, but every Windows installer I can find on Riverbank's website just fails on either XP 32 or Win7 64: "not a valid Win32 app". Any help would be appreciated!

Same problem as #18 and #19. I had 32-bit Python with 64-bit PyQt, plus the Python path was incorrect in the Windows PATH environment variable. Replaced 32-bit Python with 64-bit and added the correct path... It works now :p Thanks for the tutorials.
https://www.saltycrane.com/blog/2007/01/how-to-install-pyqt-41-python-25-and-qt_8340/
Forest fire prediction using sensors and LoRa communications

Dependencies: X_NUCLEO_IKS01A2

trace_helper.h

Committer: spadala
Date: 5 months ago
Revision: 51:925c07d0d7cf
Parent: 0:7037ed05f54

#ifndef APP_TRACE_HELPER_H_
#define APP_TRACE_HELPER_H_

/**
 * Helper function for the application to set up Mbed trace.
 * It wouldn't do anything if FEATURE_COMMON_PAL is not added
 * or if the trace is disabled using mbed_app.json
 */
void setup_trace();

#endif /* APP_TRACE_HELPER_H_ */
https://os.mbed.com/users/spadala/code/ForestSafe/file/925c07d0d7cf/trace_helper.h/
What is difference between HashMap and HashSet

Carvia Tech | May 25, 2019 | 2 min read | 85 views

Both HashMap and HashSet are part of the Java Collections Framework.

- HashMap: essentially a hash-table-based implementation of the Map interface. It permits null values and one null key. Duplicate keys are not allowed in a map.
- HashSet: an implementation of the Set interface, backed by a HashMap instance. A set, by definition, does not allow duplicate values.

Both classes require their keys/elements to implement the equals and hashCode methods properly.

Difference b/w the two

Here is a summary of the differences between the two classes:

                HashMap                                  HashSet
Interface       Map                                      Set
Stores          key-value pairs                          unique elements
Duplicates      duplicate keys not allowed (put          duplicate elements are ignored
                overwrites the existing value)
Nulls           one null key, any number of null values  one null element
Backing         hash table                               backed by a HashMap instance

Code samples

import java.util.HashMap;

class Scratch {
    public static void main(String[] args) {
        HashMap<String, Integer> frequencyMap = new HashMap<>();
        frequencyMap.put("word1", 10);
        frequencyMap.put("word2", 20);
        frequencyMap.put("word3", 30);
        System.out.println("frequencyMap = " + frequencyMap);

        frequencyMap.put("word2", 25);  // overwrites the existing value for this key
        System.out.println("frequencyMap = " + frequencyMap);
    }
}

frequencyMap = {word1=10, word3=30, word2=20}
frequencyMap = {word1=10, word3=30, word2=25}

import java.util.HashSet;

class Scratch {
    public static void main(String[] args) {
        HashSet<String> dictionary = new HashSet<>();
        dictionary.add("word1");
        dictionary.add("word2");
        dictionary.add("word3");
        System.out.println("dictionary = " + dictionary);

        dictionary.add("word2");  // duplicate element is ignored
        System.out.println("dictionary = " + dictionary);

        dictionary.add(null);     // only one null element is kept
        dictionary.add(null);
        System.out.println("dictionary = " + dictionary);
    }
}

dictionary = [word1, word3, word2]
dictionary = [word1, word3, word2]
dictionary = [null, word1, word3, word2]

Top articles in this category:
- Difference between HashMap and ConcurrentHashMap
- What is difference between Vector and ArrayList, which one shall be preferred
- What is difference between JDK JRE and JVM
- What is difference between sleep() and wait() method in Java?
- What is difference between Callable and Runnable Interface?
- Can the keys in HashMap be mutable - Discuss internals of a ConcurrentHashmap (CHM).
https://www.javacodemonk.com/what-is-difference-between-hashmap-and-hashset-c709f51c
CC-MAIN-2019-47
en
tt_message_context_ival(library call)

NAME
    tt_message_context_ival - retrieve the integer value of a message's
    context

SYNOPSIS
    #include <Tt/tt_c.h>

    Tt_status tt_message_context_ival(
        Tt_message m,
        const char *slotname,
        int *value);

DESCRIPTION
    The tt_message_context_ival function retrieves the integer value of a
    message's context.

    The m argument is the opaque handle for the message involved in this
    operation. The slotname argument describes the context of this message.
    The value argument points to the location to return the value.

    If there is no context slot associated with slotname,
    tt_message_context_ival returns a NULL pointer in *value.

RETURN VALUE
    Upon successful completion, the tt_message_context_ival function returns
    the status of the operation as one of the following Tt_status values:

    TT_OK               The operation completed successfully.

    TT_ERR_NOMP         The ttsession(1) process is not running and the
                        ToolTalk service cannot restart it.

    TT_ERR_NUM          The integer value passed was invalid (out of range).

    TT_ERR_POINTER      The pointer passed does not point to an object of
                        the correct type for this operation.

    TT_ERR_SLOTNAME     The specified slotname is syntactically invalid.

    TT_WRN_NOTFOUND     The named context does not exist on the specified
                        message.

APPLICATION USAGE
    The application can use tt_free(3) to free any data stored in the
    address returned by the ToolTalk API.

SEE ALSO
    Tt/tt_c.h - Tttt_c(5), tt_free(3).

Formatted: January 24, 2005
http://nixdoc.net/man-pages/HP-UX/man3/tt_message_context_ival.3.html
Provided by: erlang-manpages_22.0.7+dfsg-1build1_all

NAME
       global_group - Grouping nodes to global name registration groups.

DESCRIPTION
       This module makes it possible to partition the nodes of a system into
       global groups. Each global group has its own global namespace, see
       global(3erl).

       The main advantage of dividing systems into global groups is that the
       background load decreases while the number of nodes to be updated is
       reduced when manipulating globally registered names.

       The Kernel configuration parameter global_groups defines the global
       groups (see also kernel(7) and config).

EXPORTS
       global_groups() -> {GroupName, GroupNames} | undefined

              Types:
                 GroupName = group_name()
                 GroupNames = [GroupName]

              Returns a tuple containing the name of the global group that
              the local node belongs to, and the list of all other known
              group names. Returns undefined if no global groups are
              defined.

       info() -> [info_item()]

              Types:, State is equal to synced. If no global groups are
              defined, State that have subscribed to nodeup and nodedown
              messages.

       monitor_nodes(Flag) -> ok

              Types:
                 Flag = boolean()

              Depending on Flag, the calling process starts subscribing
              (Flag equal to true) or stops subscribing (Flag equal to
              false) to node status change messages. A process that has
              subscribed receives the messages {nodeup, Node} and
              {nodedown, Node} when a group node connects or disconnects,
              respectively.

       own_nodes() -> Nodes

              Types:
                 Nodes = [Node :: node()]

              Returns the names of all group nodes, regardless of their
              current status.

       registered_names(Where) -> Names

              Types:
                 Where = where()
                 Names = [Name :: name()]

              Returns a list of all names that are globally registered on
              the specified node or in the specified global group.

       send(Name, Msg) -> pid() | {badarg, {Name, Msg}}
       send(Where, Name, Msg) -> pid() | {badarg, {Name, Msg}}

              Types:
                 Where = where()
                 Name = name()
                 Msg = term()

              Searches for Name, globally registered on the specified node
              or in the specified global group, or (if argument Where is
              not provided) in any global group. The global groups are
              searched in the order that they appear in the value of
              configuration parameter global_groups. If Name is found,
              message Msg is sent to the corresponding pid. The pid is also
              the return value of the function. If the name is not found,
              the function returns {badarg, {Name, Msg}}.

              Returns {error, {'invalid global_groups definition', Bad}} if
              configuration parameter global_groups has an invalid value
              Bad.

       whereis_name(Name) -> pid() | undefined
       whereis_name(Where, Name) -> pid() | undefined

              Types:
                 Where = where()
                 Name = name()

              Searches for Name, globally registered on the specified node
              or in the specified global group, or (if argument Where is
              not provided) in any global group. The global groups are
              searched in the order that they appear in the value of
              configuration parameter global_groups. If Name.

SEE ALSO
       global(3erl), erl(1)
http://manpages.ubuntu.com/manpages/eoan/man3/global_group.3erl.html
Patching Libraries to Instrument Downstream Calls

To instrument downstream calls, use the X-Ray SDK for Python to patch the libraries that your application uses. The X-Ray SDK for Python can patch. When you use a patched library, the X-Ray SDK for Python creates a subsegment for the call and records information from the request and response. A segment must be available for the SDK to create the subsegment, either from the SDK middleware or from AWS Lambda.

Note
If you use the SQLAlchemy ORM, you can instrument your SQL queries by importing the SDK's version of SQLAlchemy's session and query classes. See Use SQLAlchemy ORM for instructions.

To patch all available libraries, use the patch_all function in aws_xray_sdk.core. Some libraries, such as httplib and urllib, may need double patching enabled by calling patch_all(double_patch=True).

Example main.py – patch all supported libraries

import boto3
import botocore
import requests
import sqlite3

from aws_xray_sdk.core import xray_recorder
from aws_xray_sdk.core import patch_all

patch_all()

To patch individual libraries, call patch with a tuple of library names.

Example main.py – patch specific libraries

import boto3
import botocore
import requests
import mysql.connector

from aws_xray_sdk.core import xray_recorder
from aws_xray_sdk.core import patch

libraries = ('botocore', 'mysql')
patch(libraries)

Note
In some cases, the key that you use to patch a library does not match the library name. Some keys serve as aliases for one or more libraries.

Libraries    Aliases
httplib      httplib and http.client
mysql        mysql-connector-python

Tracing Context for Asynchronous Work

For asyncio-integrated libraries, or to create subsegments for asynchronous functions, you must also configure the X-Ray SDK for Python with an async context. Import the AsyncContext class and pass an instance of it to the X-Ray recorder.

Note
Web framework support libraries, such as AIOHTTP, are not handled through the aws_xray_sdk.core.patcher module. They will not appear in the patcher catalog of supported libraries.

Example main.py – patch aioboto3

import asyncio
import aioboto3
import requests

from aws_xray_sdk.core.async_context import AsyncContext
from aws_xray_sdk.core import xray_recorder
from aws_xray_sdk.core import patch

xray_recorder.configure(service='my_service', context=AsyncContext())

libraries = ('aioboto3',)
patch(libraries)
https://docs.aws.amazon.com/xray/latest/devguide/xray-sdk-python-patching.html
The stdio C library function

int fsetpos(FILE *stream, const fpos_t *pos);

sets the current position in the stream to the position represented by pos. The argument pos is a pointer to an fpos_t object whose value was previously obtained by a call to the fgetpos function.

Function prototype of fsetpos

int fsetpos(FILE *stream, const fpos_t *pos);

- stream : A pointer to a FILE object which identifies a stream.
- pos : A pointer to an fpos_t object containing a position previously obtained by calling the fgetpos function.

Return value of fsetpos

This function returns zero on success. In case of error it returns a non-zero value and sets the global variable errno to a system-specific positive value.

C program to show the use of fsetpos function

The following program uses the fgetpos function to get the position of the first character in a file and store it in a variable of type fpos_t. Later we use this value to reset the file position indicator to point to the first character of the file using the fsetpos function. As a result, the program first prints the first four characters of the file, then starts printing again from the first character of the file.

#include <stdio.h>

int main() {
    FILE *file;
    int ch, counter;
    fpos_t position;

    file = fopen("textFile.txt", "r");
    if (file == NULL) {
        perror("Error: Unable to open a file");
        return 1;
    }
    /* Storing position of first character in file */
    fgetpos(file, &position);

    for (counter = 0; counter < 23; counter++) {
        if (counter == 4) {
            /* Resetting file position pointer to starting position */
            fsetpos(file, &position);
        }
        ch = fgetc(file);
        printf("%c", ch);
    }
    fclose(file);
    return 0;
}

Output (with textFile.txt containing "TechCrashCourse.com")

TechTechCrashCourse.com
http://www.techcrashcourse.com/2015/08/fsetpos-stdio-c-library-function.html
CC-MAIN-2017-39
mXSS Attacks: Attacking well-secured Web-Applications by using innerHTML Mutations

Mario Heiderich, Horst Goertz Institute for IT Security, Ruhr-University Bochum, Germany
Jörg Schwenk, Horst Goertz Institute for IT Security, Ruhr-University Bochum, Germany
Jonas Magazinius, Chalmers University of Technology, Sweden
Edward Z. Yang, Stanford University, USA
Tilman Frosch, Horst Goertz Institute for IT Security, Ruhr-University Bochum, Germany

CCS'13, November 04-08, 2013, Berlin, Germany. Copyright 2013 ACM.

Figure 1: Information flow in an mXSS attack.

1. INTRODUCTION

Mutation-based Cross-Site-Scripting (mXSS). Server- and client-side XSS filters share the assumption that their HTML output and the browser-rendered HTML content are mostly identical. In this paper, we show how this premise is false for important classes of web applications that use the innerHTML property to process user-contributed content. Instead, this very content is mutated by the browser, such that a harmless string that passes nearly all of the deployed XSS filters is subsequently transformed into an active XSS attack vector by the browser layout engine itself.

The information flow of an mXSS attack is shown in Figure 1: the attacker carefully prepares an HTML- or XML-formatted string and injects it into a web application. This string will be filtered or even rewritten by a server-side XSS filter, and will then be passed to the browser. If the browser contains a client-side XSS filter, the string will be checked again. At this point, the string is still harmless and cannot be used to execute an XSS attack. However, as soon as this string is inserted into the browser's DOM by using the innerHTML property, the browser will mutate the string.
This mutation is highly unpredictable, since it is not part of the specified innerHTML handling but a proprietary optimization of HTML code implemented differently in each of the major browser families. The mutated string now contains a valid XSS vector, and the attack will be executed on rendering of the new DOM element. Both server- and client-side filters were unable to detect this attack, because the string scanned in these filters did not contain any executable code.

Table 1: Overview on the mXSS vectors discussed in this paper

- Backtick Characters breaking Attribute Delimiter Syntax (Section 3.1)
- XML Namespaces in Unknown Elements causing Structural Mutation (Section 3.2)
- Backslashes in CSS Escapes causing String-Boundary Violation (Section 3.3)
- Misfit Characters in Entity Representation breaking CSS Strings (Section 3.4)
- CSS Escapes in Property Names violating entire HTML Structure (Section 3.5)
- Entity-Mutation in non-HTML Documents (Section 3.6)
- Entity-Mutation in non-HTML context of HTML documents (Section 3.7)

Mutation-based XSS (mXSS) makes an impact on all three major browser families (IE, Firefox, Chrome). Table 1 gives an overview on the mXSS subclasses discovered so far and points to their detailed description. A web application is vulnerable if it inserts user-contributed input with the help of innerHTML or related properties into the DOM of the browser. It is difficult to statistically evaluate the number of websites affected by the seven attack vectors covered in this paper, since automated testing fails to reliably detect all these attack prerequisites: if innerHTML is only used to insert trusted code from the web application itself into the DOM, it is not vulnerable. However, it can be stated that amongst the most popular web pages, roughly one third uses the innerHTML property, and about 65% use JavaScript libraries like jQuery [7], which abet mXSS attacks by using the innerHTML property instead of the corresponding DOM methods.
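The filter-bypass flow described above can be sketched as a toy, string-level simulation. Everything here is hypothetical: naive_filter and legacy_serializer are stand-ins (the mutation rule is modeled on the backtick example from Section 3.1), not a real filter or browser engine:

```python
import re

def naive_filter(html):
    # Stand-in for a server-side XSS filter: blank out properly quoted
    # attribute values, then reject anything still carrying an event handler.
    stripped = re.sub(r'"[^"]*"', '""', html)
    return 'onload=' not in stripped  # True means "looks harmless, let it pass"

def legacy_serializer(html):
    # Toy model of the legacy-IE innerHTML read-back: a double-quoted
    # attribute value starting with a backtick loses its quote delimiters,
    # so the payload ends up outside any quoted region.
    return re.sub(r'(\w+)="(`[^"]*)"', r'\1=\2', html)

payload = '<img src="test.jpg" alt="``onload=xss()" />'
assert naive_filter(payload)              # benign-looking vector passes the check
mutated = legacy_serializer(payload)
assert not naive_filter(mutated)          # after mutation it carries an active handler
print(mutated)                            # <img src="test.jpg" alt=``onload=xss() />
```

In a real attack the mutated markup is produced by the browser itself only after the filtered string has passed inspection, which is why neither the server-side nor the client-side check ever sees the active vector.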
However, it is possible to single out a large class of vulnerable applications (webmailers) and name high-profile state-of-the-art XSS protection techniques that can be circumvented with mXSS. Thus the alarm we want to raise with this paper is that an important class of web applications is affected, and that nearly all XSS mitigation techniques fail.

Webmail Clients. Webmail constitutes a class of web applications particularly affected by mutation-based XSS: nearly all of them (including e.g. Microsoft Hotmail, Yahoo! Mail, Rediff Mail, OpenExchange, Roundcube and other tools and providers) were vulnerable to the vectors described in this paper. These applications use the innerHTML property to display user-generated HTML content. Before doing so, the content is thoroughly filtered by server-side anti-XSS libraries in recognition of the dangers of a stored XSS attack. The vectors described in this paper will pass through the filter, because the HTML string contained in the body does not form a valid XSS vector, but would require only a single innerHTML access to be turned into an attack by the browser itself. Here the attacker may submit the attack vector within the HTML-formatted body of an email. Most webmail clients do not use innerHTML to display this email in the browser, but a simple click on the Reply button may trigger the attack: to attach the contents of the mail body to the reply being edited in the webmail client, mostly innerHTML access is used.

HTML Sanitizers. We analysed a large variety of HTML sanitizers such as HTML Purifier, htmLawed, OWASP AntiSamy, jsoup, kses and various commercial providers. At the time of testing, all of them were (and many of them still are) vulnerable against mXSS attacks. Although some of the authors reacted with solutions, the major effort was to alert the browser vendors and trigger fixes for the innerHTML transformations. In fact, several of our bug reports have led to subsequent changes in browser behavior.
To protect users, we have decided to anonymise the names of several formerly affected browsers and applications used as examples in our work.

This paper makes the following contributions:

1. We identify an undocumented but long-existing threat against web applications, which enables an attacker to conduct XSS attacks even if strong server- and client-side filters are applied. This novel class of attack vectors utilizes performance-enhancement peculiarities present in all major browsers, which mutate a given HTML string before it is rendered. We propose the term mXSS (for Mutation-based XSS) to describe this class of attacks, to disambiguate and distinguish them from classic, reflected, persistent and DOM-based XSS attacks.

2. We discuss client- and server-side mitigation mechanisms. In particular, we propose and evaluate an in-browser protection script, entirely composed in JavaScript, which is practical, feasible and has low overhead. With this script, a web application developer can implement a fix against mXSS attacks without relying on server-side changes or browser updates. The script overwrites the getter methods of the DOM properties we identified as vulnerable and changes the HTML handling into an XML-based processing, thereby effectively mitigating the attacks and stopping the mutation effects.¹

3. We evaluated this attack in three ways: first, we analyzed the attack surface for mXSS and give a rough estimate of the number of vulnerable applications on the Internet; second, we conducted a field study testing commonly used web applications such as Yahoo! Mail and other high-profile websites, determining whether they could be subjected to mXSS attacks; third, we have examined common XSS filter software such as AntiSamy, HTML Purifier, Google Caja and Blueprint for mXSS vulnerabilities, subsequently reporting our findings back to the appropriate tool's author(s).
¹ As a result, one can purposefully choose XML-based processing for security-critical sites and HTML-based processing for performance-critical sites.

2. PROBLEM DESCRIPTION

In the following sections, we describe the attack vectors which arise from the use of the innerHTML property in websites. We will outline the history of findings and recount a realistic attack scenario. The problems we identify leave websites vulnerable against the novel kind of mXSS attacks, even if the utilized filter software fully protects against the dangers of classic Cross-Site Scripting.

2.1 The innerHTML Property

Originally introduced to browsers by Microsoft with Internet Explorer 4, the property quickly gained popularity among web developers and was adopted by other browsers, despite being non-standard. The use of innerHTML and outerHTML is supported by each and every one of the commonly used browsers in the present landscape. Consequently, the W3C started a specification draft to unify innerHTML rendering behaviors across browser implementations [20].

An HTML element's innerHTML property deals with creating HTML content from arbitrarily formatted strings on write access on the one hand, and with serializing HTML DOM nodes into strings on read access on the other. Both directions are relevant in the scope of our paper: the read access is necessary to trigger the mutation, while the write access will attach the transformed malicious content to the DOM. The W3C working draft document, which is far from completion, describes this process as generation of an ordered set of nodes from a string-valued attribute. Due to being attached to a certain context node, if this attribute is evaluated, all children of the context node are replaced by the (ordered) node-set generated from the string. To use innerHTML, the DOM interface of an element is enhanced with an innerHTML attribute/property.
Setting of this attribute can occur via the element.innerHTML=value syntax, and in this case the attribute will be evaluated immediately. A typical usage example of innerHTML is shown in Listing 1: when the HTML document is first rendered, the <p> element contains the "First text" text node. When the anchor element is clicked, the content of the <p> element is replaced by the "New <b>second</b> text." HTML-formatted string.

Listing 1: Example on innerHTML usage

<script type="text/javascript">
    var Change = function () {
        document.getElementsByTagName('p')[0].innerHTML
            = 'New <b>second</b> text.';
    }
</script>
<p>First text.</p>
<a href="javascript:Change()">Change text above!</a>

outerHTML displays similar behavior with a single exception: unlike in the innerHTML case, the whole context (not only the content of the context node) will be replaced here. The innerHTML access nevertheless changes the utilized markup, for several reasons and in differing ways depending on the user agent. The following code listing shows some (non security-related) examples of these performance optimizations:

Listing 2: Examples for internal HTML mutations to save CPU cycles

<!-- User Input -->
<s class="">hello&#x20;<b>goodbye</b>

<!-- Browser-transformed Output -->
<S>hello <B>goodbye</B></S>

The browser (in this case Internet Explorer 8) mutates the input string in multiple ways before sending it to the layout engine: the empty class is removed, the tag names are set to upper-case, the markup is sanitized and the HTML entities are resolved. These transformations happen in several scenarios:

1. Access to the innerHTML or outerHTML properties of the affected or parent HTML element nodes;
2. Copy (and subsequent paste) interaction with the HTML data containing the affected nodes;
3. HTML editor access via the contenteditable attribute, the designMode property or other DOM method calls like document.execCommand();
4. Rendering the document in a print preview container or similar intermediate views.

Browsers tend to use the outerHTML property of the HTML container or the innerHTML.
For the sake of brevity, we will use the term innerHTML access to refer to some or all of the items from the above list.

2.2 Problem History and Early Findings

In 2006, a non-security-related bug report was filed by a user, noting an apparent flaw in the print preview system for HTML documents rendered by a popular web browser. Hasegawa's 2007 analysis [11] of this bug report showed that once the innerHTML property of an element's container node in an HTML tree was accessed, the attributes delimited by backticks or containing values starting with backticks were replaced with regular ASCII quote delimiters: the content had mutated. Often the regular quotes disappeared, leaving the backtick characters unquoted and therefore vulnerable to injections. As Hasegawa states, an attacker can craft input operational for bypassing XSS detection systems because of its benign nature, yet having a future possibility of being transformed by the browser into code that executes arbitrary JavaScript. An example vector is discussed in Section 3.1. This behavior constitutes a fundamental basis for our research on the attacks and mitigations documented in this paper.

2.3 Mutation-based Cross-Site Scripting

Certain websites permit their users to submit inactive HTML aimed at visual and structural improvement of the content they wish to present. Typical examples are webmailers (visualization of HTML-mail content provided by the sender of the email) or collaborative editing of complex HTML-based documents (HTML content provided by all editors). To protect these applications and their users from XSS attacks, website owners tend to call server-side HTML filters, like e.g. the HTML Purifier mentioned in Section 5.1, for
While it has become almost impossible to bypass those filters with regular HTML/Javascript strings, the mxss problem has yet to be tackled by most libraries. The core issue is as follows: the HTML markup an attacker uses to initiate an mxss attack is considered harmless and contains no active elements or potentially malicious attributes the attack vector examples shown in Section 3 demonstrate that. Only the browser will transform the markup internally (each browser family in a different manner), thereby unfolding the embedded attack vector and executing the malicious code. As previously mentioned, such attacks can be labeled mxss XSS attacks that are only successful because the attack vector is mutated by the browser, a result of behavioral mishaps introduced by the internal HTML processing of the user agents. 3. EXPLOITS The following sections describe a set of innerhtml-based attacks we discovered during our research on DOM mutation and string transformation. We present the code purposefully appearing as sane and inactive markup before the transformation occurs, while it then becomes an active XSS vector executing the example method xss() after that said transformation. This way server- and client-side XSS filters are being elegantly bypassed. The code shown in Listing 3 provides one basic example of how to activate (Step 2 in the chain of events described in Section 4) each and any of the subsequently following exploits it simply concatenates an empty string to an existing innerhtml property. The exploits can further be triggered by the DOM operations mentioned in Section 2.2. Any innerhtml-access mentioned in the following sections signifies a reference to a general usage of the DOM operations framed by this work. Listing 3: Code-snippet illustrating the minimal amount of DOM-transaction necessary to cause and trigger mxss attacks <script > window. onload = function (){ document. body. 
innerhtml += ; } </ script > We created a test-suite to analyze the innerhtml transformations in a systematic way; this tool was later published on a related website dedicated to HTML and HTML5 security implications 2. The important innerhtml-transformations are highlighted in the code examples to follow. 3.1 Backtick Characters breaking Attribute Delimiter Syntax This DOM string-mutation and the resulting attack technique was first publicly documented in 2007, in connection with the original print-preview bug described in Section 2.2. Meanwhile, the attack can only be used in legacy browsers as their modern counterparts have deployed effective fixes against this problem. Nevertheless, the majority of tested 2 innerhtml Test-Suite, innerhtml, 2012 web applications and XSS filter frameworks remain vulnerable against this kind of attack albeit measurable existence of a legacy browser user-base. The code shown in Listing 4 demonstrates the initial attack vector and the resulting transformation performed by the browser engine during the processing of the innerhtml property. Listing 4: innerhtml-access to an element with backtick attribute values causes JavaScript execution <img src =" test. jpg " alt ="``onload=xss()" /> <IMG alt =``onload=xss() 3.2 XML Namespaces in Unknown Elements causing Structural Mutation A browser that does not yet support the HTML5 standard is likely to interpret elements such as article, aside, menu and others as unknown elements. A developer can decide how an unknown element is to be treated by the browser: A common way to pass these instructions is to use the xmlns attribute, thus providing information on which XML namespace the element is supposed to reside on. Once the xmlns attribute is being filled with data, the visual effects often do not change when compared to none or empty namespace declarations. However, once the innerhtml property of one of the element s container nodes is being accessed, a very unusual behavior can be observed. 
The browser prefixes the unknown but namespaced element with the XML namespace, which in itself contains unquoted input from the xmlns attribute. The code shown in Listing 5 demonstrates this case.

Listing 5: innerHTML access to an unknown element causes mutation and unsolicited JavaScript execution

<article xmlns="img src=x onerror=xss()//">123</article>

<img src=x onerror=xss()//:article>123</img src=x onerror=xss()//:article>

The result of this structural mutation and the pseudo-namespace allowing white-space is an injection point. It is through this point that an attacker can simply abuse the fact that an attribute value is being rendered despite its malformed nature, consequently smuggling arbitrary HTML into the DOM and executing JavaScript. This problem was reported and fixed in the modern browsers. A similar issue was discovered and published by Silin³.

³ Silin, A., XSS using xmlns attribute in custom tag when copying innerHTML, 97, Dec. 2011
The sequence property: val\27ue would result in the innerhtml representation PROPERTY: val ue. An attacker can abuse this behavior by injecting arbitrary CSS code hidden inside a properly quoted and escaped CSS string. This way HTML filters checking for valid code that observes the standards can be bypassed, as depicted in Listing 6. Listing 6: innerhtml-access to an element using CSS escapes in CSS strings causes JavaScript execution <p style =" font - family : ar\27\3bx\3a expression\28xss\28\29\29\3bial " > </ p > <P style =" FONT - FAMILY : ar ;x:expression(xss());ial " > </P> Unlike the backtick-based attacks described in Section 3.1, this technique allows recursive mutation. This means that, for example, a double-escaped or double-encoded character will be double-decoded in case that innerhtml-access occurs twice. More specifically, the \5c 5c escape sequence will be broken down to the \5c sequence after first inner- HTML-access, and consequently decoded to the \ character after the second innerhtml-access. During our attack surface s evaluation, we discovered that some of the tested HTML filters could be bypassed with the use of &#amp;x5c 5c 5c 5c or alike sequences. Due to the backslashes presence allowed in CSS property values, the HTML entity representation combined with the recursive decoding feature had to be employed for code execution and attack payload delivery. The attacks that become possible through this technique range from overlay attacks injecting otherwise unsolicited CSS properties (such as positioning instructions and negative margins), to arbitrary JavaScript execution, font injections (as described by Heiderich et al. [14]), and the DHTML behavior injections for levering XSS and ActiveX-based attacks. 3.4 Misfit Characters in Entity Representation breaking CSS Strings Combining aforementioned exploit with enabling CSS-escape decoding behavior results in yet another interesting effect observable in several browsers. 
That is, when both CSS escape and the canonical representation for the double-quote character inside a CSS string are used, the render engine converts them into a single quote, regardless of those two characters seeming unrelated. This means that the \22, ", " and " character sequences will be converted to the character upon innerhtml-access. Based on the fact that both characters have syntactic relevance in CSS, the severity of the problems arising from this behavior is grand. The code example displayed in Listing 7 shows a mutation-based XSS attack example. To sum up and underline once again, it is based on fully valid and inactive HTML and CSS markup that will unfold to active code once the innerhtml-access is involved. Listing 7: innerhtml-access to an element using CSS strings containing misfit HTML entities causes JavaScript execution <p style =" font - family : ar";x= expression ( xss ())/* ial " > </p> <P style =" FONT - FAMILY : ar ;x= expression ( xss ())/* ial " > </P> We can only speculate about the reasons for this surprising behavior. One potential explanation is that in case when the innerhtml transformation might lead the \22, ", " and " sequences to be converted into the actual double-quote character ( ), then given that the attribute itself is being delimited with double-quotes an improper handling could not only break the CSS string but even disrupt the syntactic validity of the surrounding HTML. An attacker could abuse that to terminate the attribute with a CSS escape or HTML entity, and, afterwards, inject crimson HTML to cause an XSS attack. Our tests showed that it is not possible to break the HTML markup syntax with CSS escapes once used in a CSS string or any other CSS property value. The mutation effects only allow CSS strings to be terminated illegitimately and lead to an introduction of new CSS property-value pairs. Depending on the browser, this may very well lead to an XSS exploit executing arbitrary JavaScript code. 
Supporting this theory, the attack technique shown in Section 3.5 preserves markup integrity but omits CSS-string sanity checks within the transformation algorithm for HTML entities and CSS escapes.

3.5 CSS Escapes in Property Names violating entire HTML Structure

As mentioned in Section 3.4, an attacker cannot abuse mutation-based attacks to break the markup structure of the document containing the style attribute hosting the CSS escapes and entities. Thus far, the CSS escapes and entities were used exclusively in CSS property values and not in the property names. Applying the formerly discussed techniques to CSS property names instead of values forces some browsers into a completely different behavior, as demonstrated in Listing 8.

Listing 8: innerHTML access to an element with invalid CSS property names causes JavaScript execution

<img style="font-fa\22onload\3dxss\28\29\20mily:arial" src="test.jpg" />
<IMG style="font-fa" onload=xss() mily:arial" src="test.jpg">

Creating a successful exploit capable of executing arbitrary JavaScript requires an attacker to first terminate the style attribute by using a CSS escape, so that the injected code triggers the exploit while still following the CSS syntax rules. Otherwise, the browser would simply remove the property-value pair deemed invalid. This syntax constraint renders several characters useless for creating exploits; white-space characters, the colon, the equals sign, curly brackets and the semicolon are among them. To bypass the restriction, the attacker simply needs to escape those characters as well, as illustrated in Listing 8. By escaping the entire attack payload, the adversary can abuse the mutation feature and deliver arbitrary CSS-escaped HTML code. Note that the attack only works with the double-quote representation inside double-quoted attributes. Once a website uses single quotes to delimit attributes, the technique can no longer be applied, as innerHTML access will convert single quotes to double quotes.
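The same generic CSS hex-escape rule explains the Listing 8 payload. Decoding the escaped property name with a small helper (ours, for illustration; not browser code) shows how the forbidden characters, double quote, equals sign, parentheses and space, reappear after mutation and terminate the style attribute early.

```javascript
// Illustrative decoder for CSS hex escapes (same CSS 2.1 rule:
// up to six hex digits plus one optional trailing whitespace).
function decodeCssEscapes(value) {
  return value.replace(/\\([0-9a-fA-F]{1,6})\s?/g, (_, hex) =>
    String.fromCodePoint(parseInt(hex, 16))
  );
}

// The escaped property name from Listing 8:
console.log(decodeCssEscapes('font-fa\\22onload\\3dxss\\28\\29\\20mily'));
// → font-fa"onload=xss() mily
```

The decoded `"` closes the attribute value, turning `onload=xss()` into a separate, active event-handler attribute on re-parse.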
Then again, the \22 escape sequence can be used to break and terminate the attribute value. The code displayed in Listing 9 showcases this yet again surprising effect.

Listing 9: Example for automatic quote conversion on innerHTML access

<!-- Example Attacker Input -->
<p style=fo\27\22o:bar>
<!-- Example Browser Output -->
<P style="fo"o:bar"></P>

3.6 Entity-Mutation in non-HTML Documents

Once a document is rendered in XHTML/XML mode, different rules apply to the handling of character entities, non-well-formed content (including unquoted attributes, unclosed tags and elements, and invalid element nesting) and other aspects of document structure. A web server can instruct a browser to render a document in XHTML/XML mode by setting a matching MIME type via the Content-Type HTTP header; in particular, the text/xhtml, text/xml, application/xhtml+xml and application/xml MIME types can be employed for this task (more exotic MIME types like image/svg+xml and application/vnd.wap.xhtml+xml can also be used). These specific, MIME-type-dependent parser behaviors cause several browsers to show anomalies when, for instance, CSS strings in style elements are used in combination with (X)HTML entities. Several of these behaviors can be used in the context of mutation-based XSS attacks, as the code example in Listing 10 shows.

Listing 10: innerHTML access to an element with encoded XHTML in CSS string values causes JavaScript execution

<style>*{font-family:ar&lt;img src=&quot;test.jpg&quot; onload=&quot;xss()&quot;/&gt;ial}</style>
<style>*{font-family:ar<img src="test.jpg" onload="xss()"/>ial}</style>

Here, the browser automatically decodes the HTML entities hidden in the CSS string specifying the font family. By doing so, the parser is led to assume that the CSS string contains actual HTML.
While in text/html neither a mutation nor any form of parser confusion leading to script execution would occur, in text/xhtml and related MIME-type rendering modes a CSS style element is allowed to contain other markup elements. Thus, without leaving the context of the style element, the parser decides to equally consider the decoded img element hidden in the CSS string, evaluates it, and thereby executes the JavaScript connected to the activation of the event handler. This problem is unique to the WebKit browser family, although similar issues were spotted in other browser engines. Note that despite the very small share of sites using MIME types such as text/xhtml, text/xml, application/xhtml+xml and application/xml (0.0075% in the Alexa Top 1 Million website list), an attacker might abuse MIME sniffing, frame inheritance and other techniques to force a website into the rendering mode necessary for a successful exploit. The security issues arising from MIME sniffing have been covered by Barth et al., Gebre et al. and others [2, 3, 8].

3.7 Entity-Mutation in non-HTML context of HTML documents

In-line SVG support provided in older browsers could lead to XSS attacks originating from HTML entities embedded inside style and similar elements, which are by default evaluated in their canonical form upon innerHTML access. This problem has been reported to and mitigated by the affected browser vendors and is listed here to further support our argument. The code example in Listing 11 showcases the anatomy of this attack.
Listing 11: Misusing HTML entities in inline-SVG CSS-string properties to execute arbitrary JavaScript

<p><svg><style>*{font-family:'&lt;/style&gt;&lt;img/src=x&amp;tab;onerror=xss()//'}</style></svg></p>
<p><svg><style>*{font-family:'</style></svg><img src="x" onerror="xss()" /> '}</p>

This vulnerability was present in a popular open-source user agent and has since been fixed successfully, following a bug report.

3.8 Summary

In order to initiate the mutation, all of the exploits shown here require a single access to the innerHTML property of a surrounding container. Except for the attack vector discussed in Section 3.1, all attacks can be upgraded to allow recursive mutation, making double-, triple- and further multiply-encoded escapes and entities useful in the attack scenario as soon as multiple innerHTML accesses to the same element take place. The attacks were successfully tested against a large range of publicly available web applications and XSS filters; see Section 4.

4. ATTACK SURFACE

The attacks outlined in this paper target client-side web application components, e.g. JavaScript code, that use the innerHTML property to perform dynamic updates to the content of the page. Rich-text editors, web clients, dynamic content management systems and components that pre-load resources are examples of such features. In this section we detail the conditions under which a web application is vulnerable. Additionally, we attempt to estimate the prevalence of these conditions in present-day web pages. The basic conditions for a mutation event to occur are the serialization and deserialization of data. As mentioned in Section 2, mutation in the serialization of the DOM tree occurs when the innerHTML property of a DOM node is accessed. Subsequently, when the mutated content is parsed back into a DOM tree, e.g. when assigned to innerHTML or written to the document using document.write, the mutation is activated.
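The serialize-and-reparse round trip can be modeled with a deliberately simplified toy serializer. This is our own sketch, not real browser code: like older Internet Explorer versions, it omits the quotes around attribute values that contain no whitespace, which is exactly the optimization the backtick attack from Section 3.1 abuses.

```javascript
// Toy model of an IE-style attribute serializer. Values without
// whitespace are serialized without surrounding quotes.
function toySerializeAttr(name, value) {
  return /\s/.test(value) ? `${name}="${value}"` : `${name}=${value}`;
}

// A benign-looking, well-quoted input attribute value...
console.log(toySerializeAttr('alt', '``onerror=xss()'));
// → alt=``onerror=xss()
// ...which a backtick-aware parser (old IE) re-reads as an empty alt
// attribute followed by a separate onerror event-handler attribute.

console.log(toySerializeAttr('alt', 'two words'));
// → alt="two words"   (whitespace forces quotes; the attack fails)
```

The mutation only becomes dangerous at the second step of the round trip, when the now-unquoted serialization is parsed back into a DOM tree.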
The instances in Listing 12 are far from the only ways for a mutation event to occur, but they exemplify vulnerable code patterns. In order for an attacker to exploit such a mutation event, it must take place on attacker-supplied data. This condition makes it difficult to statistically estimate the number of vulnerable websites; however, the attack surface can be examined through an evaluation of the number of websites using such vulnerable code patterns.

Listing 12: Code snippets exemplifying vulnerable code patterns

// Native JavaScript / DOM code
a.innerHTML = b.innerHTML;
a.innerHTML += additional_content;
a.insertAdjacentHTML('beforebegin', b.innerHTML);
document.write(a.innerHTML);

// Library code
$(element).html(additional_content);

4.1 InnerHTML Usage

Since an automated search for innerHTML does not determine the exploitability of its usage, it can only serve as an indication of the severity of the problem. To evaluate the prevalence of innerHTML usage on the web, we conducted a study of the Alexa top 10,000 most popular web sites. A large fraction, approximately one third, of these web sites utilized vulnerable code patterns like the ones in Listing 12 in their code for updating page content. Major websites like Google, Amazon, eBay and Microsoft could be identified among these. Again, this does not imply that these web sites can be exploited. Overall, we found 74.5% of the Alexa Top 1000 websites to be using innerHTML assignments. While the usage of innerHTML is very common, the circumstances under which it is vulnerable to exploitation are in fact hard to quantify. Note, though, that almost all applications equipped with an editable HTML area are prone to being vulnerable. Additionally, there are some notable examples of potentially vulnerable code patterns identifiable in multiple commonly used JavaScript libraries, e.g. jQuery [7] and SWFObject [27].
Indeed, more than 65% of the top 10,000 most popular websites employ one of these popular libraries (48.87% using jQuery), whose code could be used to trigger actual attacks. Further studies have to be made as to whether web applications relying on any of these libraries are affected, as this largely depends on how the libraries are used. In certain cases, a very specific set of actions needs to be performed for the vulnerable section of the code to be reached. Regardless, including such a library always puts a given website at risk of attacks. Finally, we queried the Google Code Search Engine (GCSE) as well as the GitHub search tool to determine which libraries and public source files make use of potentially dangerous code patterns. The search query yielded 184,000 positive samples using the GCSE and 1,196,000 positive samples using the GitHub search tool. While this does not provide us with an absolute number of vulnerable websites, it shows how widely the usage of innerHTML is distributed; any of these libraries using vulnerable code patterns in combination with user-generated content is likely to be vulnerable to mXSS attacks.

4.2 Web-Mailers

A class of web applications particularly vulnerable to mXSS attacks are classic web-mailers: applications that facilitate receiving, reading and managing HTML mails in a browser. Here, the fact that HTML rich-text editors (RTE) are usually involved forms the basis for the use of the innerHTML property, which is triggered by almost any interaction with the mail content. This includes composing, replying, spell-checking and other common features of applications of this kind. A special attack vector is sending an mXSS string within the body of an HTML-formatted mail. We analyzed commonly used web-mail applications and spotted mXSS vulnerabilities in almost every single one of them, including e.g. Microsoft Hotmail, Yahoo!
Mail, Rediff Mail, OpenExchange, Roundcube, and many other products, some of which cannot yet be named for the sake of user protection. The discovery was quickly followed by bug reports sent to the respective vendors, which were acknowledged.

4.3 Server-Side XSS Filters

The class of mXSS attacks poses a major challenge for server-side XSS filters. To completely mitigate these attacks, they would have to simulate the mutation effects of the three major browser families in order to determine whether a given string may be an mXSS vector. At the same time, they should not filter benign content, in order not to break the web application. The fixes applied to HTML sanitizers, as mentioned in the introduction, are new rules for known mutation effects. Developing new filtering paradigms that can discover even unknown attack vectors remains a challenging task.

5. MITIGATION TECHNIQUES

The following sections describe a set of mitigation techniques that can be applied by website owners, developers, or even users to protect against the cause and impact of mutation-XSS attacks. We provide details on two approaches. The first one is based on a server-side filter, whereas the other focuses on client-side protection and intercepts access to critical DOM properties.

5.1 Server-side mitigation

The most direct mitigation strategy is to avoid outputting content that browsers are known to convert incorrectly. In specific terms, the flawed content should be replaced with semantically equivalent content that is converted properly. Let us underline that the belief that well-formed HTML is unambiguous is false: only a browser-dependent subset of well-formed HTML will be preserved across innerHTML accesses and transactions. A comprehensible and uncomplicated policy is to simply disallow any of the special characters that browsers are known to have trouble converting properly.
For many HTML attributes and CSS properties this is not a problem, since their set of allowed values already excludes these particular special characters. Unfortunately, in the case of free-form content, such a policy may be too stringent. For HTML attributes, we can easily refine our directive by observing that ambiguity only occurs when the browser omits quotes from its serialized representation. Insertion of quotes can be guaranteed by, for example, appending a trailing whitespace to the text, a change unlikely to modify the semantics of the original text. Indeed, the W3C specification states that user agents may ignore surrounding whitespace in attributes. A more aggressive transformation would only insert a space when the attribute would otherwise be serialized without quotes yet contains a backtick. It should be noted that the backtick remains the only character that causes Internet Explorer to mis-parse the resulting HTML. For CSS, refining our policy is more difficult. Due to the improper conversion of escape sequences, we cannot allow any CSS special characters in general, even in their escaped form. URLs are a particular concern: parentheses and single quotes are valid characters in a URL, but are simultaneously considered special characters in CSS. Fortunately, most major web servers accept percent-encoded versions of these characters as equivalent, so it is sufficient to apply common percent-escaping to these characters in URLs instead. We have implemented these mitigation strategies in HTML Purifier, a popular HTML filtering library [32]; as HTML Purifier did not implement any anomaly detection, the filter was fully vulnerable to these attacks.
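The URL refinement described above can be sketched as a one-line transform. The helper name is ours and does not reflect HTML Purifier's actual API; it merely percent-encodes the two URL-legal characters that are simultaneously CSS-special.

```javascript
// Sketch of the refined URL policy: replace parentheses and single
// quotes, which are special characters in CSS, with their
// percent-encoded equivalents before embedding the URL in a stylesheet.
function percentEscapeCssUrl(url) {
  return url.replace(/[()']/g, c =>
    '%' + c.charCodeAt(0).toString(16).toUpperCase().padStart(2, '0')
  );
}

console.log(percentEscapeCssUrl("http://example.com/a('b)"));
// → http://example.com/a%28%27b%29
```

Since web servers treat `%28`, `%29` and `%27` as equivalent to the literal characters, the URL keeps working while no longer carrying CSS-significant characters.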
These fixes were reminiscent of similar security bugs that were tackled in 2010 [31] and in subsequent releases in 2011 and later. In that case, the set of unambiguous encodings was smaller than that suggested by the specification, so a very delicate fix had to be crafted as a result, both fixing the bug and still allowing the same level of expressiveness. Since browser behavior varies to a great degree, a server-side mitigation of this style is only practical for handling a subset of HTML, such as the subset normally allowed for high-risk user-submitted content. Furthermore, this strategy cannot protect against dynamically generated content, a limitation which will be addressed in the next section. Note that problems such as the backtick mutation still affect HTML Purifier as well as Blueprint and Google Caja; they have only recently been addressed successfully by the OWASP Java HTML Sanitizer Project.

5.2 Client-side mitigation

Browsers implementing ECMAScript 5 and higher offer an interface for another client-side fix. The approach makes use of the possibility, granted to developers, of overwriting the handlers for innerHTML and outerHTML access in order to intercept the browser's performance optimization and, consequently, the markup mutation process. Instead of permitting a browser to employ its own proprietary HTML optimization routines, we utilize the internal XML processor a browser provides via the DOM. We label this wrapping and sanitation technique TrueHTML. TrueHTML relies on the XMLSerializer DOM object provided by all of the tested user agents. The XMLSerializer can be used to perform several operations on XML documents and strings. What is interesting for our specific case is that XMLSerializer.serializeToString() will accept an arbitrary DOM structure or node collection and transform it into an XML string. We decided to replace the innerHTML getters with an interceptor that processes the accessed contents as if they were actual XML.
This has the following benefits:

1. The resulting string output is free from all mutations described and documented in Section 3. The attack surface can therefore be mitigated by a simple replacement of the browser's innerHTML access logic with our own code. The code has been made available to a selected group of security researchers in the field, who have been tasked with ensuring its robustness and reliability.

2. The XMLSerializer object is a native browser component. Therefore, the performance impact is low compared to other methods of pre-processing or filtering innerHTML data before or after mutations take place. We elaborate on the specifics of the performance impact in Section 6.

3. The solution is transparent and does not require additional developer effort, coming down to a single JavaScript include. No existing JavaScript or DOM code needs to be modified; the script hooks silently into the necessary property accessors and replaces the insecure browser code. At present, the script works on all modern browsers tested (Internet Explorer, Firefox, Opera and Chrome) and can be extended to work on Internet Explorer 6 or earlier versions.

4. The XMLSerializer object post-validates potentially invalid code and thereby provides yet another level of sanitation. That means that even insecure or non-well-formed user input can be filtered and kept free from mutation XSS and similar attack vectors. (Footnote 4: OWASP Wiki, OWASP_Java_HTML_Sanitizer_Project, Feb. 2013)

5. The TrueHTML approach is generic, transparent and website-agnostic. This means that a user can utilize this script as the core of a protective browser extension, or apply it as a user script to globally protect herself against the cause and impact of mutation-XSS attacks.

6. EVALUATION

This section describes the settings and dataset used for evaluating the performance penalty introduced by TrueHTML. We focus on assessing the client-side mitigation approach.
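The mechanics of the accessor interception can be sketched on a plain object, independently of any browser. All names below are ours, and the trivial backtick-encoding transform merely stands in for the XMLSerializer-based re-serialization the real TrueHTML performs on the DOM prototypes, which requires an actual browser.

```javascript
// Browser-independent sketch of getter interception. The transform
// replaces the XMLSerializer round trip, which needs a real DOM.
function interceptProperty(obj, prop, transform) {
  let value = obj[prop]; // backing store replacing the native slot
  Object.defineProperty(obj, prop, {
    get() { return transform(value); }, // sanitize on every read
    set(v) { value = v; },
  });
}

// Mock element; the real approach hooks Element.prototype accessors.
const node = { innerHTML: '<img alt="``onerror=xss()">' };
interceptProperty(node, 'innerHTML', html => html.replace(/`/g, '&#96;'));

console.log(node.innerHTML);
// → <img alt="&#96;&#96;onerror=xss()">
```

Every read of the property now passes through the interceptor, so code consuming the serialized markup never sees the mutation-prone characters.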
While HTML Purifier has been changed to reflect our determination to mitigate this class of attacks, the new features are limited to adding items to the internal list of disallowed character combinations. This does not measurably increase the overhead introduced by HTML Purifier. Performance is therefore the focus of our evaluation, as the transfer overhead introduced by TrueHTML is exceptionally low. The HTTP Archive has analysed a set of more than 290,000 URLs and, over the course of this project, has determined that the average transfer size of a single web page is more than 1,200 kilobytes, 52 kB of which are taken up by HTML content and 214 kB by JavaScript. The prototype of TrueHTML is implemented in only 820 bytes of code, which we consider a negligible transfer overhead.

6.1 Evaluation Environment

To assess the overhead introduced by TrueHTML in a realistic scenario, we conducted an evaluation based on the Alexa top 10,000 most popular web sites. We crawled these sites with a recursion depth of one. As pointed out in Section 4, approximately one third of these sites make use of innerHTML. In a next step, we determined the performance impact of TrueHTML in a web browser by accessing 5,000 URLs randomly chosen from this set. Additionally, we assessed the performance of TrueHTML in typical usage scenarios, like displaying an email in a web mailer or accessing popular websites, and investigated the relation between page-load-time overhead and page size in a controlled environment. To demonstrate the versatility of the client-side mitigation approach, we used different hardware platforms for the different parts of the evaluation. The evaluation based on the Alexa traffic ranking data was performed on virtual machines. Each instance was assigned one core of an Intel Xeon X5650 CPU running at 2.67 GHz and had access to 2 GB RAM.
The instances ran Ubuntu Desktop and Mozilla Firefox. As an example of a mid-range system, we used a laptop with an Intel Core2Duo CPU at 1.86 GHz and 2 GB RAM, running Ubuntu Desktop and Mozilla Firefox, to assess the performance in typical usage scenarios. The evaluation environment is completed by a proxy server used to inject TrueHTML into the HTML context of the visited pages, and a logging infrastructure. Once a website has been successfully loaded in the browser, we log the URL and the user-perceived page loading time using the Navigation Timing API defined by the W3C Web Performance Working Group [29]. We measure this time as the difference between the time when the onload event is fired and the time immediately after the user agent finishes prompting to unload the previous document, as provided by the performance.timing.navigationStart attribute.

6.2 Evaluation Results

Using the virtual machines, we first determined the user-perceived page loading time of the unaltered pages. In a second run, we used the proxy server to inject TrueHTML and measured the page loading time again. We calculate the overhead as the increase in page loading time, expressed as a percentage of the loading time the page needed without TrueHTML. The minimum overhead introduced by TrueHTML is 0.01% while the maximum is 99.94%. On average, TrueHTML introduces an overhead of 30.62%. The median is 25.73%, and the 90th percentile of the overhead is 68.37%. However, the significance of these results is limited, as we are unable to control for network-induced delay. In order to eliminate these effects, we conducted the following experiments locally. Using the laptop, we determined how the user experience is affected by TrueHTML in typical scenarios, like using a web mailer or browsing popular webpages.
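The overhead metric used above reduces to a one-line computation (the helper name is ours, not part of the evaluation tooling):

```javascript
// Overhead as the percentage increase of page load time over the
// baseline measurement without TrueHTML.
function overheadPercent(baselineMs, withTrueHtmlMs) {
  return ((withTrueHtmlMs - baselineMs) / baselineMs) * 100;
}

console.log(overheadPercent(400, 500)); // → 25
```

For instance, a page loading in 400 ms without TrueHTML and 500 ms with it corresponds to a 25% overhead.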
We therefore assigned document.body.innerHTML of an otherwise empty DOM the content of a typical body of a multipart email message (consisting of both the content types text/plain and text/html), the scraped content of the landing pages of google.com, yahoo.com, baidu.com, duckduckgo.com and youtube.com, and the scraped content of a map display on Google Maps, as well as of a Facebook profile and a Twitter timeline. Each generated page was accessed three times, and the load times were logged according to the criteria described earlier. The data were generated locally, thus the results do not contain network-induced delays. Table 2 shows the average values. The results of this test show that the user-perceived page load time depends not only on the size of the content, but also on the structure and type of the markup. While the data show that in no case is the user experience negatively affected in the typical use cases, this kind of evaluation does not offer a generic insight into how the TrueHTML performance overhead relates to content size and the number of markup elements. To evaluate this in a controlled environment, we generated a single <p></p> markup fragment containing 1 kB of text. Again, we assigned document.body.innerHTML of an otherwise empty DOM this markup element between one and one thousand times, creating pages ranging from one element with 1 kB of text content up to one thousand elements with 1,000 kB of text content. As before, the data was generated locally. We compare page load times with and without TrueHTML as described above. While the load time increases slightly with size and the number of markup elements, it can be seen from Figure 2 that the performance penalty introduced by TrueHTML does not rise significantly.

7. RELATED WORK

XSS. First reported back in the year 2000 [6], Cross-Site Scripting (XSS) attacks gained recognition and attention from a larger audience with the Samy MySpace worm in 2005 [17].
Several types of XSS attacks have been described thus far.

Figure 2: Page load time plotted against page size / number of markup elements

Table 2: User-perceived page load times ordered by content size, with and without TrueHTML (TH)

Content            Size     w/o TH    w/ TH
DuckDuckGo         8.2 kB   336 ms    361 ms
Body               8.5 kB   316 ms    349 ms
Baidu.com          11 kB    336 ms    466 ms
Facebook profile   58 kB    539 ms    520 ms
Google             111 kB   533 ms    577 ms
Youtube            174 kB   1216 ms   1346 ms
Twitter timeline   190 kB   1133 ms   1164 ms
Yahoo              244 kB   893 ms    937 ms
Google Maps        299 kB   756 ms    782 ms

Reflected XSS typically presents a user with an HTML document accessed with maliciously manipulated parameters (GET parameters, HTTP headers, cookies). These parameters are sent to the server for application-logic processing, and the document is then rendered along with the injected content. Stored XSS is injected into web pages through user-contributed content stored on the server. Without proper processing on the server side, scripts will be executed for any user that visits a web page with this content. DOM XSS, or XSS of the third kind, was first described by Klein [18]. It may be approached as a type of reflected XSS attack where the processing is done by a JavaScript library within the browser rather than on the server. If the malicious script is placed in the hash part of the URL, it is not even sent to the server, meaning that server-side protection techniques fail in that instance. Server-side mitigation techniques range from simple character encoding or replacement to a full rewriting of the HTML code. The advent of DOM XSS was one of the main reasons for introducing XSS filters on the client side. The IE8 XSS Filter was the first fully integrated solution [25], followed by the Chrome XSS Auditor in 2009 [4]. For Firefox, client-side XSS filtering is implemented through the NoScript extension. XSS attack mitigation has been covered in a wide range of publications [5, 8, 9, 16, 26, 35].
Noncespaces [10] use randomized XML namespace prefixes as an XSS mitigation technique, which would make detection of injected content reliable. DSI [23] tries to achieve the same goal based on a classification of HTML content into trusted and untrusted content on the server side, subsequently changing browser parsing behavior to take this distinction into account. Blueprint [21] generates a model of the user input on the server side and transfers this model, together with the user-contributed content, to the browser; browser behavior is modified by injecting a JavaScript library to process the model along with the input. While the method used to implement Blueprint in current browsers is remarkably similar to our mitigation approach, it seems hard to exclude the mXSS string from the model, as it looks like legitimate content. mXSS attacks are likely to bypass all three of these defensive techniques, given that the browser itself is instrumented to create the attack payload from originally benign-looking markup; mXSS is mostly not in scope for these defenses and thus remains undetected.

Mutation-based Attacks. Weinberger et al. [30] give an example where innerHTML is used to execute a DOM-based XSS; this is a different kind of attack than those described in this paper, because no mutations are imposed on the content, and the content did not pass the server-side filter. Comparable XSS attacks based on changes to the HTML markup were initially described for client-side XSS filters. Vela Nava et al. [24] and Bates et al. [4] have shown that the IE8 XSS Filter could once be used to weaponize harmless strings and turn them into valid XSS attack vectors by applying a mutation carried out by the regular expressions used by the XSS Filter, thus circumventing server-side protection. Zalewski covers concatenation problems based on NUL strings in innerHTML assignments in the Browser Security Handbook [33] and later dedicates a section to backtick mutation in his book The Tangled Web [34].
Other mutation-based attacks have been reported by Barth et al. [1] and Heiderich [13]. Hooimeijer et al. describe dangers associated with the sanitization of content [15] and claim that they were able, for each of a large number of XSS vectors, to produce a string that would result in that valid XSS vector after sanitization. The vulnerabilities described by Kolbitsch et al. may form the basis for an extremely targeted attack by web malware [19]. Those authors state that attack vectors may be prepared taking into account the mutation behavior of different browser engines. Further, our work can be seen as another justification of the statement by Louw et al. [22]: the main obstacle a web application must overcome when implementing XSS defenses is the divide between its understanding of the web content represented by an HTML sequence and the understanding web browsers will have of the same. We show that there is yet another data processing layer in the browser, which has managed to remain unknown to the web application until now. Note that our tests showed that Blueprint would have to be modified to be able to handle prevention of mXSS attacks. The current status of standardization can be retrieved from [20]. Aside from the aforementioned print-preview problem referenced in Section 2.2, another early report on XSS vulnerabilities connected to innerHTML was filed in 2010 for WebKit browsers by Vela Nava [28]. Further contributions to this problem scope have been submitted by Silin, Hasegawa and others, and are documented on the HTML5 Security Cheatsheet [12].

8. CONCLUSION

This paper describes a novel attack technique based on a problematic and mostly undocumented browser behavior that has existed for more than ten years, initially introduced with Internet Explorer 4 and adopted by other browser vendors afterwards.
It identifies the attacks enabled by this behavior and delivers an easily implementable solution and protection for web application developers and site owners. The discussed browser behavior results in a widely usable technique for conducting XSS attacks against applications otherwise immune to HTML and JavaScript injections. These internal browser features transparently convert benign markup so that it becomes an XSS attack vector once certain DOM properties such as innerHTML and outerHTML are accessed or other DOM operations are performed. We label this kind of attack mutation-based XSS (mXSS) and dedicate this paper to thoroughly introducing and discussing it. Subsequently, we analyze the attack surface and propose measurements and strategies for mitigating the dangers for web applications, browsers and users. We also supply research-derived evaluations of the feasibility and practicability of the proposed mitigation techniques. The insight gained from this publication indicates the prevalence of risks and threats caused by the multi-layer approach with which the web is designed. Defensive tools and libraries must gain awareness of the additional processing layers that browsers possess. While server- as well as client-side XSS filters have become capable protection tools covering and mitigating various attack scenarios, mXSS attacks pose a problem that has yet to be overcome by the majority of the existing implementations. A string mutation occurring during the communication between the single layers of the communication stack, from browser to web application and back, is highly problematic: given its place and time of occurrence, it cannot be predicted without detailed case analysis.

9. REFERENCES

[1] A. Barth. Bug 29278: XSSAuditor bypasses from sla.ckers.org.
[2] A. Barth, J. Caballero, and D. Song. Secure content sniffing for web browsers, or how to stop papers from reviewing themselves.
In Proceedings of the IEEE Symposium on Security and Privacy. IEEE.
[3] A. Barua, H. Shahriar, and M. Zulkernine. Server side detection of content sniffing attacks. In Software Reliability Engineering (ISSRE), 2011 IEEE 22nd International Symposium on. IEEE, 2011.
[4] D. Bates, A. Barth, and C. Jackson. Regular expressions considered harmful in client-side XSS filters. In Proceedings of the 19th International Conference on World Wide Web, WWW '10.
[5] P. Bisht and V. N. Venkatakrishnan. XSS-GUARD: Precise dynamic prevention of cross-site scripting attacks. In Conference on Detection of Intrusions and Malware & Vulnerability Assessment.
[6] CERT.org. CERT Advisory: Malicious HTML Tags Embedded in Client Web Requests.
[7] The jQuery Foundation. jQuery: The Write Less, Do More, JavaScript Library.
[8] M. Gebre, K. Lhee, and M. Hong. A robust defense against content-sniffing XSS attacks. In Digital Content, Multimedia Technology and its Applications (IDC), International Conference on. IEEE.
[9] B. Gourdin, C. Soman, H. Bojinov, and E. Bursztein. Toward secure embedded web interfaces. In Proceedings of the USENIX Security Symposium.
[10] M. V. Gundy and H. Chen. Noncespaces: Using randomization to defeat cross-site scripting attacks. Computers & Security, 31(4).
[11] Y. Hasegawa.
[12] M. Heiderich. HTML5 Security Cheatsheet.
[13] M. Heiderich. Towards Elimination of XSS Attacks with a Trusted and Capability Controlled DOM. PhD thesis, Ruhr-University Bochum.
[14] M. Heiderich, M. Niemietz, F. Schuster, T. Holz, and J. Schwenk. Scriptless attacks: stealing the pie without touching the sill. In ACM Conference on Computer and Communications Security (CCS).
[15] P. Hooimeijer, B. Livshits, D. Molnar, P. Saxena, and M. Veanes. Fast and precise sanitizer analysis with BEK. In Proceedings of the 20th USENIX Conference on Security, SEC '11, Berkeley, CA, USA. USENIX Association.
[16] M. Johns.
Code Injection Vulnerabilities in Web Applications - Exemplified at Cross-site Scripting. PhD thesis, University of Passau, Passau.
[17] S. Kamkar. Technical explanation of the MySpace worm.
[18] A. Klein. DOM Based Cross Site Scripting or XSS of the Third Kind. Web Application Security Consortium, 2005.
[19] C. Kolbitsch, B. Livshits, B. Zorn, and C. Seifert. Rozzle: De-cloaking Internet malware. In Proc. IEEE Symposium on Security & Privacy.
[20] T. Leithead. DOM Parsing and Serialization (W3C Editor's Draft, 07 November 2012). org/hg/innerhtml/raw-file/tip/index.html.
[21] M. T. Louw and V. N. Venkatakrishnan. Blueprint: Robust prevention of cross-site scripting attacks for existing browsers. In Proceedings of the IEEE Symposium on Security and Privacy, SP '09, Washington, DC, USA. IEEE Computer Society.
[22] M. T. Louw and V. N. Venkatakrishnan. Blueprint: Robust prevention of cross-site scripting attacks for existing browsers. Proc. IEEE Symposium on Security & Privacy.
[23] Y. Nadji, P. Saxena, and D. Song. Document Structure Integrity: A robust basis for cross-site scripting defense. In NDSS. The Internet Society.
[24] E. V. Nava and D. Lindsay. Abusing Internet Explorer 8's XSS Filters. http://p42.us/ie8xss/abusing_ie8s_xss_filters.pdf.
[25] D. Ross. IE8 XSS Filter design philosophy in-depth. 03/ie8-xss-filter-design-philosophy-in-depth.aspx.
[26] P. Saxena, D. Molnar, and B. Livshits. SCRIPTGARD: Automatic context-sensitive sanitization for large-scale legacy web applications. In Proceedings of the 18th ACM Conference on Computer and Communications Security. ACM.
[27] B. van der Sluis. SWFObject: an easy-to-use and standards-friendly method to embed Flash content, which utilizes one small JavaScript file.
[28] E. Vela. Issue 43902: innerHTML decompilation issues in textarea. issues/detail?id=
[29] W3C. Navigation Timing. /PR-navigation-timing/.
[30] J. Weinberger, P. Saxena, D. Akhawe, M. Finifter, E. C. R.
Shin, and D. Song. A systematic analysis of XSS sanitization in web application frameworks. In ESORICS.
[31] E. Z. Yang. HTML Purifier CSS quoting full disclosure.
[32] E. Z. Yang. HTML Purifier.
[33] M. Zalewski. Browser Security Handbook.
[34] M. Zalewski. The Tangled Web: A Guide to Securing Modern Web Applications. No Starch Press.
[35] G. Zuchlinski. The Anatomy of Cross Site Scripting. Hitchhiker's World, 8.

mXSS Attacks: Attacking well-secured Web-Applications by using innerHTML Mutations. Mario Heiderich, Tilman Frosch, Jörg Schwenk, Jonas Magazinius, Edward Z. Yang.
http://docplayer.net/339520-Mxss-attacks-attacking-well-secured-web-applications-by-using-innerhtml-mutations.html
CC-MAIN-2017-39
en
refinedweb
Set-Up on Systems with Visual Studio 2010 SP1:
- cd C:\Program Files (x86)\Microsoft F#\v4.0
- copy fsi.exe fsi64.exe
- corflags /32bit- /Force fsi64.exe
Then open Visual Studio:
- From the Tools menu select Options, F# Tools, and then edit the F# Interactive Path.
- Set the path to C:\Program Files (x86)\Microsoft F#\v4.0\fsi64.exe.
- Restart F# Interactive.

Set-Up on Systems with Visual Studio 2012 Preview
With the Visual Studio 2012 preview, it is possible to use “Cloud Numerics” assemblies locally. However, because the Visual Studio project template is not available for the VS 2012 preview, you'll have to use the following procedure to bypass the installation of the template:
- Install “Cloud Numerics” from a command prompt, specifying “msiexec -i MicrosoftCloudNumerics.msi CLUSTERINSTALL=1”, and follow the instructions displayed by the installer for any missing prerequisites.
- To use F# Interactive, open Visual Studio. From the Tools menu select Options, F# Tools, F# Interactive, and set “64-bit F# Interactive” to True.

Using Cloud Numerics F# Extensions from Your Project
To configure your F# project to use the “Cloud Numerics” F# extensions:
- Create an F# Application in Visual Studio.
- From the Build menu, select Configuration Manager, and change the Platform attribute to x64.
- In the Project menu, select the Properties <your-application-name> item. From your project's application properties tab, ensure that the target .NET Framework is 4.0, not 4.0 Client Profile.
- Add the CloudNumericsFSharpExtensions project to your Visual Studio solution.
- Add a project reference from your application to the CloudNumericsFSharpExtensions project.
- Add references to the “Cloud Numerics” managed assemblies. These assemblies are typically located in C:\Program Files\Microsoft Cloud Numerics\v0.2\Bin.
- If you plan to deploy your application to Windows Azure, right-click on the reference for FSharp.Core, and select Properties. In the properties window, set Copy Local to True.
- Finally, you might need to edit the following in your .fs source file.
- The code within the #if INTERACTIVE … #endif block is required only if you're planning to use F# Interactive.
- Depending on where it is located on your file system and whether you're using a Release or Debug build, you might need to adjust the path specified to CloudNumericsFSharpExtensions.

#if INTERACTIVE
#I @"C:\Program Files\Microsoft Cloud Numerics\v0.2\Bin"
#I @"..\..\..\CloudNumericsFSharpExtension.Distributed.IO"
#endif
open Microsoft.Numerics.FSharp
open Microsoft.Numerics
open Microsoft.Numerics.Mathematics
open Microsoft.Numerics.Statistics
open Microsoft.Numerics.LinearAlgebra
NumericsRuntime.Initialize()

Using Cloud Numerics from F# Interactive
A simple way to use the “Cloud Numerics” libraries from F# Interactive is to copy and send the previous piece of code to F# Interactive. Then, you will be able to create arrays, call functions, and so forth. For example:

> let x = DistDense.range 1.0 1000.0;;
val x : Distributed.NumericDenseArray<float>
> let y = ArrayMath.Sum(1.0/x);;
val y : float = 7.485470861

Note that when using F# Interactive, the code executes in a serial fashion. However, parallel execution is straightforward, as we'll see next.

Compiling and Deploying Applications

open System
open System.Collections.Generic
#if INTERACTIVE
#I @"C:\Program Files\Microsoft Cloud Numerics\v0.2\Bin"
#I @"..\..\..\CloudNumericsFSharpExtensions.Signal"
#r "Microsoft.Numerics.Distributed.IO"
#endif
open Microsoft.Numerics.FSharp
open Microsoft.Numerics
open Microsoft.Numerics.Mathematics
open Microsoft.Numerics.Statistics
open Microsoft.Numerics.LinearAlgebra
open Microsoft.Numerics.Signal
NumericsRuntime.Initialize()

let x = DistDense.range 1.0 1000.0
let y = ArrayMath.Sum(1.0/x)
printfn "%f" y

You can use the same serial code in the parallel case.
We then run it using mpiexec to get the result. Finally, to deploy the application to Azure, we'll re-purpose the “Cloud Numerics” C# Solution template to get to the Deployment Utility:
- Create a new “Cloud Numerics” C# Solution
- Add your F# application project to the Solution
- Add the “Cloud Numerics” F# extensions to the Solution
- Set AppConfigure as the Start-Up project
- Build the Solution to get the “Cloud Numerics” Deployment Utility
- Build the F# application
- Use the “Cloud Numerics” Deployment Utility to deploy a cluster
- Use the “Cloud Numerics” Deployment Utility to submit a job. Instead of the default executable, select your F# application executable to be submitted

Indexing and Assignment
F# has an elegant syntax for operating on slices of arrays. With the “Cloud Numerics” F# Extensions we can apply this syntax to distributed arrays, for example:

let y = x.[1L..3L,*]
x.[4L..6L,4L..6L] <- x.[7L..9L,7L..9L]

Operator Overloading
We supply operator overloads for matrix multiply as x *@ y, and for the linear solve of a*x=b as let x = a /@ b. Also, operator overloads are available for element-wise operations on arrays:
- Element-wise power: a.**b
- Element-wise mod: a.%b
- Element-wise comparison: .=, .<, .<>, and so forth

Convenience Type Definitions
To enable a more concise syntax, we have added shortened definitions for the array classes as follows:
- LNDA<'T> : Microsoft.Numerics.Local.DenseArray<'T>
- DNDA<'T> : Microsoft.Numerics.Distributed.NumericDenseArray<'T>
- LNSM<'T> : Microsoft.Numerics.Local.SparseMatrix<'T>
- DNSM<'T> : Microsoft.Numerics.Distributed.SparseMatrix<'T>

Array Building
Finally, we provide several functions for building arrays, for example from F# sequences or by random sampling. They wrap the “Cloud Numerics” .NET APIs to provide a functional programming experience.
The functions are within 4 modules:
- LocalDense
- LocalSparse
- DistDense
- DistSparse

These modules include functions for building arrays of a specific type, for example:

let y = DistDense.range 10 100
let z = DistSparse.createFromTuples [(0L,3L,2.0);(1L,3L,3.0);(1L,2L,5.0);(3L,0L,7.0)]

Building and Running Example Projects
The “Cloud Numerics” F# Extensions has a folder named “Examples” that holds three example .fs files, including:
- A set of short examples that demonstrate how to invoke different library functions.
- A latent semantic analysis example that demonstrates computation of correlations between different documents, in this case, SEC 10-K filings of 30 Dow Jones companies.
- An air traffic analysis example that demonstrates statistical analysis of flight arrival and delay data.

The examples are part of a self-contained solution. To run them:
- Copy the four .csv input data files to C:\Users\Public\Documents (you can use a different folder, but you will need to adjust the path string in the source code to correspond to this folder).
- In Solution Explorer, select an example to run by moving the .fs files up and down.
- Build the example project and run it using mpiexec, as explained before.
https://blogs.msdn.microsoft.com/cloudnumerics/2012/08/20/cloud-numerics-f-extensions/
Hi, I have created an application to send data from a WCF server application to my client-side application. For this I have added custom serialization instead of the default serialization. It now sends data to the client as a stream, and I am able to de-serialize the object back in the client application.

But my function only works for a specific type. For instance, if I am returning a class of type Customer, it serializes and de-serializes based on the class I am specifying. I need to write a generic class which will accept any type of object for serialization and de-serialization. How can I do this?

Following is the piece of code that I want to generalize:

public class CustomBodyWriter : BodyWriter
{
    private IEnumerable<List<Customer>> Customers;

    public CustomBodyWriter(IEnumerable<List<Customer>> customers)
        : base(false) // False should be passed here to avoid buffering the message
    {
        this.Customers = customers;
    }

    protected override void OnWriteBodyContents(System.Xml.XmlDictionaryWriter writer)
    {
        XmlSerializer serializer = new XmlSerializer(typeof(List<Customer>));
        writer.WriteStartElement("Customers");
        foreach (List<Customer> customer in Customers)
        {
            serializer.Serialize(writer, customer);
        }
        writer.WriteEndElement();
    }
}

public IEnumerable<List<Customer>> GetAllCustomersImpl()
{
    TestStreamingBL TestStreamingBL = new TestStreamingBL();
    List<Customer> list = new List<Customer>();
    int count = 1;
    foreach (Customer customer in TestStreamingBL.GetAllCustomers())
    {
        list.Add(customer);
        if (count == 100)
        {
            count = 1;
            yield return list;
            list.Clear();
        }
        count++;
    }
    yield break;
}

In the above code, whenever the "yield return list" executes, OnWriteBodyContents will get called.

Any idea???

Anilal
https://www.daniweb.com/programming/software-development/threads/350433/help-for-creating-a-generic-method-for-custombodywrier-inherited-from-bodywriter
In Scala, you can nest just about anything inside anything. You can define functions inside other functions, and classes inside other classes. Here is a simple example of the latter. (I follow this explanation in a hopefully more intuitive context.)

import collection.mutable._

class Network {
  class Member(val name: String) {
    val contacts = new ArrayBuffer[Member]
  }
  private val members = new ArrayBuffer[Member]
  def join(name: String) = {
    val m = new Member(name)
    members += m
    m
  }
}

Of course, you can do the same in Java:

import java.util.*;

public class Network {
  public class Member {
    private String name;
    private ArrayList<Member> contacts = new ArrayList<>();
    public Member(String name) { this.name = name; }
    public String getName() { return name; }
    public ArrayList<Member> getContacts() { return contacts; }
  }
  private ArrayList<Member> members = new ArrayList<>();
  public Member join(String name) {
    Member m = new Member(name);
    members.add(m);
    return m;
  }
}

But there is a difference. In Scala, each instance has its own class Member, just like each instance has its own field members. Consider two networks:

val chatter = new Network
val myFace = new Network

Now chatter.Member and myFace.Member are different classes. In contrast, in Java, there is only one inner class Network.Member. The Scala approach is more regular. For example, to make a new inner object, you simply use new with the type name:

val fred = new chatter.Member("Fred")

In Java, you need to use a special syntax:

Member fred = chatter.new Member("Fred");

And in Scala, the compiler can do useful type checking. In our network example, you can add a member within its own network, but not across networks:

val fred = chatter.join("Fred")
val wilma = chatter.join("Wilma")
fred.contacts += wilma // Ok
val barney = myFace.join("Barney") // Has type myFace.Member
fred.contacts += barney // No: can't add a myFace.Member to a buffer of chatter.Member elements

For networks of people, this behavior probably makes sense.
If you don't want it, there are two solutions. First, you can move the Member type somewhere else. A good place would be the Network companion object.

object Network {
  class Member(val name: String) {
    val contacts = new ArrayBuffer[Member]
  }
}

class Network {
  private val members = new ArrayBuffer[Network.Member]
  ...
}

Companion objects are used throughout Scala for class-based features, so this is no surprise. Alternatively, you can use a type projection Network#Member, which means "a Member of any Network". For example:

class Network {
  class Member(val name: String) {
    val contacts = new ArrayBuffer[Network#Member]
  }
  ...
}

You would do that if you want the fine-grained "inner class per object" feature in some places of your program, but not everywhere.

So, which language is more complex, Scala or Java? Except possibly for the Network#Member syntax, I think Scala wins hands-down. It is more regular, and it offers more functionality at the same time. It does that with a handful of basic principles, systematically applied. (Before you flame me, consider that "less familiar" is not the same as "more complex".) In contrast, with Java, you can see that inner classes were bolted onto an existing language. Did I mention that Java has restrictions on accessing local outer variables in local inner classes? And "static" inner classes? Don't get me going. It's half a chapter in Core Java. Scala doesn't have any of that.
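That restriction on local variables is easy to demonstrate in plain Java. A minimal sketch (the Counter interface and the names here are illustrative, not from the article): a local inner class may only capture locals that are final or effectively final, so mutable state usually has to be smuggled in through a wrapper such as AtomicInteger.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class LocalInnerDemo {
    public interface Counter { int next(); }

    public static Counter makeCounter(int start) {
        // 'start' is captured by the local class below, so Java requires it
        // to be final or effectively final; writing 'start++' here would
        // make the capture a compile error.
        AtomicInteger count = new AtomicInteger(0); // mutable state goes in a wrapper
        class LocalCounter implements Counter {     // local inner class capturing locals
            public int next() { return start + count.incrementAndGet(); }
        }
        return new LocalCounter();
    }

    public static void main(String[] args) {
        Counter c = makeCounter(10);
        System.out.println(c.next()); // prints 11
        System.out.println(c.next()); // prints 12
    }
}
```

Scala closures carry no such restriction: a nested function or class can read and write any enclosing local directly.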
https://community.oracle.com/blogs/cayhorstmann/2011/08/05/inner-classes-scala-and-java
CC-MAIN-2017-39
en
refinedweb
Difference between revisions of "Taking a screenshot"

Revision as of 15:03, 23 August 2011

import

An easy way to take a screenshot of your current system is using the import command:

import -window root screenshot.jpg

import is part of the ImageMagick package. In case you only want to grab a single window, you can use the xwininfo tool to find the ID of the active/focused window. The following script takes a screenshot of the currently focused window; see the import man page for more information.
https://wiki.archlinux.org/index.php?title=Taking_a_Screenshot&diff=153059&oldid=153001
Sofa is a very modular framework and thus it extensively uses shared libraries.

Differences between Windows and Linux/Mac when creating a shared library

On Linux, every class and every function is automatically exported in the shared library; you don't have to do anything. On Windows, the default behaviour is to not export anything implicitly. You must use the __declspec(dllexport) keyword to export the classes and functions you need, and you must use __declspec(dllimport) to import classes and functions from a shared library.

In Sofa, we use a specific macro for each library to silently import/export. This macro is defined as SOFA_mylib_API and is automatically set to __declspec(dllexport) if we are inside the library "mylib", because we want to expose its class and function definitions. On the other side, if we are outside the library, for instance in another library or in an application, the same macro is automatically defined as __declspec(dllimport), because we want to import class and function definitions that we don't know yet; we notify the compiler that it will find them in one of its linker dependencies.

In common Sofa libraries, the macros are all defined in the component.h file, and for plugins you will find an initMyPlugin.h setting this macro. In each file where you want to use it, you must include the corresponding header: "component.h" if you are adding a component to the common Sofa libraries, or "initMyPlugin.h" if you are implementing a new component in a plugin.

How to use the macro to import / export definitions

For instance, if we want to import / export definitions from our plugin "MyPlugin":

Example with a class

MyClass.h:

#include "initMyPlugin.h" // contains the definition of SOFA_MyPlugin_API

// export the class if we are currently building MyPlugin,
// otherwise MyClass will be imported
class SOFA_MyPlugin_API MyClass
{
    ...
};

Example with a generic class

MyGenericClass.h:

#include "initMyPlugin.h"

// we do not set the macro here since we are not defining a class but a
// class template (a pattern), and definitions really exist only with a
// template instantiation
template<class T>
class MyGenericClass
{
    ...
};

// here we notify the compiler it will find the template instantiations elsewhere
#if defined(SOFA_EXTERN_TEMPLATE) && !defined(SOFA_MYGENERICCLASS_CPP)
#ifndef SOFA_FLOAT
extern template class SOFA_MyPlugin_API MyGenericClass<MyDoubleType>;
#endif
#ifndef SOFA_DOUBLE
extern template class SOFA_MyPlugin_API MyGenericClass<MyFloatType>;
#endif
#endif

MyGenericClass.cpp:

#define SOFA_MYGENERICCLASS_CPP
#include "MyGenericClass.h"

// and here we explicitly instantiate the templated class
#ifndef SOFA_FLOAT
template class SOFA_MyPlugin_API MyGenericClass<MyDoubleType>;
#endif
#ifndef SOFA_DOUBLE
template class SOFA_MyPlugin_API MyGenericClass<MyFloatType>;
#endif

For a generic class where we cannot predict the kind of template instantiation the user will need, we do not use the import/export macro, because we cannot instantiate the template class. The user will directly use the definitions from a .inl file that you have to provide.

Example with a function

#include "initMyPlugin.h"

void SOFA_MyPlugin_API MyFunc()
{
    ...
};

You should not use the SOFA_MyPlugin_API macro for member functions; you just have to set the macro on the class owning the member functions and they will all be exported.

Common mistakes

If you are experiencing linking issues about dllimport, the problem may come from an omission or a bad use of the SOFA_*_API macro. For instance, if you copied a class from one library to another without editing its macro, its definitions will not be exported; worse, the linker will expect to find them in a dependency although they are in the currently compiled library.

Last modified: 4 July 2017
https://www.sofa-framework.org/community/doc/using-sofa/advanced-features/shared-libraries/
I have a published Python scripted parameter that is supposed to return the current date and time. The parameter is later used in the dataset field in a FeatureWriter at the end of the workspace. Testing the workspace in FME 2017.1 is fine, but running the workspace in 2018.1 doesn't work - FME stops immediately after starting translation.

Here's the script used by the parameter:

from datetime import datetime
return datetime.now().strftime('%Y-%m-%d_%I%M%p')

I have the preferred Python interpreter set to Esri ArcGIS Desktop Python (2.7). I'm very new to Python, so apologies if I'm missing something obvious. Thanks!

Answer: What happens if you set the Python interpreter (in the Workspace navigator) to the regular Python 2.7 and not the one supplied by ArcGIS?
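As a quick way to rule out the script itself, the parameter's format string can be exercised in a plain Python interpreter outside FME (the print is only for inspection; an FME scripted parameter returns the value instead):

```python
from datetime import datetime

# Same format string as the scripted parameter above: produces values
# like "2018-07-05_0312PM". If this works in a plain interpreter but the
# workspace still crashes, the interpreter choice (ArcGIS vs. regular
# Python) is the more likely culprit.
stamp = datetime.now().strftime('%Y-%m-%d_%I%M%p')
print(stamp)
```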
https://knowledge.safe.com/questions/92630/simple-python-scripted-parameter-not-working-after.html
I will walk through the steps to combine an ASP.NET MVC5 application with an App for Office to allow the app to authenticate using a Microsoft Account or using Facebook. Similar steps could also be followed to authenticate using Google. Step 1 (Create an App for Office) Using Visual Studio 2013 or Visual Studio 2012 with MVC5, create a new App for Office application. Note that the web application project which is automatically created and added to the solution includes an html page and the default Source Location of the app is that html page. We will change this later to reference our ASP.NET MVC5 web application instead. Step 2 (Create an ASP.NET MVC5 Application) Add a new ASP.NET MVC project to the solution and click the button to Change Authentication. By default authentication is set to No Authentication, so change that to Individual Accounts. Step 3 (Copy CSS and JavaScript assets to your MVC project) Copy the css files from the App and App\Home folders and paste into the Content folder of your MVC project. Copy the js files from the App and App\Home folders and paste into a folder within the Scripts folder of your MVC project. Copy the stylesheet references from the auto-generated Home.html into _Layout.cshtml in your MVC project. Copy the script references from Home.html into either _Layout.cshtml or the scripts section of Index.cshtml. Step 4 (Remove the HTML Web project) Delete the auto-generated Web project from your solution. Step 5 (Configure authentication and register with each provider) You will edit Startup.Auth.cs with the login providers you wish to support. The class that is created for you by MVC contains commented out code that can be filled in to automatically add support for each provider. This code makes use of OWIN middleware and requires a reference to each of the providers you plan to use. 
using Microsoft.Owin.Security.Facebook;
using Microsoft.Owin.Security.Google;
using Microsoft.Owin.Security.MicrosoftAccount;
using Microsoft.Owin.Security.Twitter;

In order to obtain the properties that this authentication code needs, you must first register your app with each of the authentication service providers. And this requires that you create a developer account for each one.

MICROSOFT ACCOUNT: The Client ID and Client Secret will be listed under App Settings and should be copied and pasted into Startup.Auth.cs.

Startup.Auth.cs - Uncomment the Microsoft Account section and enter the ClientId and ClientSecret. If your app requires access to particular properties (such as the email address in this code example), that field may be requested.

Startup.Auth.cs - Uncomment the Facebook Authentication section and enter the AppId and AppSecret. Again, this example shows the syntax to request the email address.

AccountController.cs - If your authentication code requests additional properties such as the email address, modify ExternalLoginCallback() to retrieve the claim attached to the identity.

Step 6 (Update the App manifest file)

YourAppForOffice.xml - The Source location should contain the url for the root of your MVC web application. In addition, you need to include the domains that provide the authentication in the app manifest.

Step 7 (Enable SSL)

Before testing your app you will need to make sure your web project has the SSL Enabled property set to True. And your App for Office will not be satisfied with a self-signed certificate. This problem is quickly solved by deploying your web app to Microsoft Azure. If you are using the *.azurewebsites.net domain assigned to your web site by Azure, then your site is already secured by a certificate provided by Microsoft. You may access your free ASP.NET web sites through MSDN or here for those without an MSDN subscription. The Source location in the app manifest should now be updated to contain the url of your Azure web site.
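For illustration, the manifest entries involved in Step 6 might look like the following sketch. The URL and domains are placeholders, not values from the article; consult the current manifest schema for the exact element names your app version requires:

```xml
<!-- Hypothetical fragment of YourAppForOffice.xml: point the source
     location at the MVC app root and whitelist the auth domains. -->
<SourceLocation DefaultValue="https://yourapp.azurewebsites.net/" />
<AppDomains>
  <AppDomain>https://login.live.com</AppDomain>
  <AppDomain>https://www.facebook.com</AppDomain>
</AppDomains>
```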
https://blogs.msdn.microsoft.com/laurieatkinson/2014/06/03/using-oauth-in-an-app-for-office/
First of all, my apologies for the vaguely formulated title. After hours of trying to understand and solve this problem (the last one in vain), I'm not capable of formulating the problem in one small sentence.

Context

I've built a system in which certain methods (we call them 'Actions') can be visually scripted when certain conditions are met, e.g. MoveToPosition. Since actions should be able to have sequential actions (triggered after the end of an action), the Action instances have a property NextActions, which is a List of IDs. Every time an action has ended, the event OnActionEnd is raised, which leads to the following method.

public void OnActionEnd()
{
    foreach (string actionID in nextActions)
    {
        ActionManager.InvokeAction(actionID);
    }
}

public class ScheduleAction
{
    // ID of the Action
    public string actionID;

    // GameObject that holds the Component in which the
    // method that needs to be invoked resides
    public GameObject targetObject;

    // The name of the method that needs to be invoked
    public string method;

    // The list of parameters that will be passed to the
    // method that will be invoked
    public List<object> parameters;

    // List of Action IDs that need to be triggered after
    // the method of this action reached its end
    public List<string> nextActionIDs;

    // Flag indicating if the Action should be triggered or not
    // (could be false for debugging purposes)
    public bool activated;

    public void InvokeAction()
    {
        if (activated && !string.IsNullOrEmpty(method))
        {
            string componentName = method.Substring(0, method.IndexOf("/"));
            string methodName = method.Substring(method.IndexOf("/") + 1);

            if (!string.IsNullOrEmpty(componentName) && !string.IsNullOrEmpty(methodName))
            {
                targetObject.InvokeInComponent(componentName, methodName, parameters.ToArray());
            }
        }
    }
}

Problem

Now for the problem... Unity freezes and ultimately crashes during this method. Why?
Because when OnActionEnd is triggered after an action, it begins the foreach loop by invoking the first action in the list NextActions. Since that next action could be just one line of code (e.g. SetColor), it will immediately trigger OnActionEnd again, causing the loop to... loop. Forever, because the first call to OnActionEnd has never finished and is at that time still at the ActionManager.InvokeAction(actionID); line.

Unfortunately, I don't know the details of what's exactly going wrong, nor have I got a solution for this. I've got a feeling I'm totally overlooking something, but since I'm the only programmer on the team I don't have someone to brainstorm with.

Tried solutions

I've tried a multi-threaded approach, which seemed to work at first, until I started invoking methods involving Unity components (you can't invoke UnityEngine methods outside the main thread).

I've tried using delegates with the event system, but since the ScheduleAction script needs to be serializable (thus can't inherit from MonoBehaviour), I can't use the OnEnable() and OnDisable() methods to subscribe the InvokeAction() method to events.

I've tried using a regular for-loop, to no avail.

Did you try using a simple for instead of foreach? Unity crashes when there is an infinite loop; that happened to me as well, and I was mad because there is no autosave :P

@ExtinctSpecie, thank you for your suggestion. As a matter of fact, I've actually tried that as well, to no avail. I've included it in the tried solutions section.

To me it sounds like you have 2 actions that both have each other as "follow up actions", and executing either one of them will already create the kind of endless recursion you are dealing with. Even if there were actions without follow-ups, all it takes is 2 actions that end up executing each other and you're stuck. I guess first you at least have to figure out if this is the problem.
If it is, that needs to be solved on paper, design-wise, before writing more code. If you have actions ColorizeToGreen and ColorizeToRed and they both lead into executing each other, what should the result be? A red object, a green object, or an object that changes color every x?
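One common way to break such a cycle, sketched below in plain Java (the question's code is Unity C#, but the idea ports directly): pass a set of already-invoked action IDs down the invocation chain, so two actions that list each other as next actions stop after one round instead of recursing forever. The class and action names here are illustrative:

```java
import java.util.*;

// Sketch of a cycle guard for the "next actions" chain described above.
class ActionChain {
    private final Map<String, List<String>> nextActions = new HashMap<>();
    final List<String> invoked = new ArrayList<>();

    void add(String id, String... next) {
        nextActions.put(id, Arrays.asList(next));
    }

    void invoke(String id, Set<String> seen) {
        if (!seen.add(id)) return;   // already ran in this chain: break the cycle
        invoked.add(id);             // stand-in for the real InvokeAction work
        for (String next : nextActions.getOrDefault(id, Collections.emptyList()))
            invoke(next, seen);
    }

    public static void main(String[] args) {
        ActionChain chain = new ActionChain();
        chain.add("ColorizeToRed", "ColorizeToGreen");  // each lists the other
        chain.add("ColorizeToGreen", "ColorizeToRed");
        chain.invoke("ColorizeToRed", new HashSet<>()); // terminates
        System.out.println(String.join(",", chain.invoked));
    }
}
```

Each action still runs once per chain, so the design question above (which color wins) reduces to the order in which the chain is started.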
http://answers.unity3d.com/questions/1343073/unity-freezes-after-invoking-method-in-loop-of-the.html
5) Allowing Capability Mode
---------------------------

Capsicum also includes 'capability mode', which locks down the available syscalls so the rights restrictions can't just be bypassed by opening new file descriptors. More precisely, capability mode prevents access to syscalls that access global namespaces, such as the filesystem or the IP:port space.

The existing seccomp-bpf functionality of the kernel is a good mechanism for implementing capability mode, but there are a few additional details that also need to be addressed.
 a) The capability mode filter program needs to apply process-wide, not just to the current thread.
 b) In capability mode, new files can still be opened with openat(2), but only if they are beneath an existing directory file descriptor.
 c) In capability mode it should still be possible for a process to send signals to itself with kill(2)/tgkill(2).

[...] functionality gives the tools needed to implement capability mode [...] "..". The same restriction applies process-wide for a process in capability mode.

As this seemed like functionality that might be more generally useful, I've implemented it independently as a new O_BENEATH flag for openat(2). The Capsicum code then always triggers the use of that flag when the dfd is a Capsicum capability, or when the prctl(2) command described above is in play.
[FreeBSD has the openat(2) relative-only behaviour for capability DFDs and processes in capability mode, but does not include the O_BENEATH flag.]

6) Patchset Notes
-----------------

I've appended the draft patchset (against v3.16-rc5) for the implementation of Capsicum capabilities, in case anyone wants to dive in [...]. Also, I've left a gap in the syscall and prctl(2) command numbering, to allow this to be merged on top of Kees Cook's seccomp(2) changes.

Regards,
David Drysdale

[1]
[2]

Changes since v1:
 - removed gratuitous LSM hooks [Andy Lutomirski, Paul Moore]
 - renamed O_BENEATH_ONLY to O_BENEATH [Christoph Hellwig]
 - updated syscall numbers to allow for seccomp(2)
 - added prctl(PR_SET_OPENAT_BENEATH) [Paolo Bonzini]
 - added tid/tgid info to seccomp_data [Paolo Bonzini]
 - update spacing for current checkpatch.pl
 - [manpages] describe struct cap_rights [Andy Lutomirski]
 - [manpages] clarify nioctl use [Andy Lutomirski]
 - [manpages] clarify CAP_FCNTL use [Andy Lutomirski]

David Drysdale (11):
  fs: add O_BENEATH flag to openat(2)
  selftests: Add test of O_BENEATH
  [...]: invoke Capsicum on FD/file conversion
  capsicum: add syscalls to limit FD rights
  capsicum: prctl(2) to force use of O_BENEATH
  seccomp: Add tgid and tid into seccomp_data

 [...] | 11 +-
 fs/namei.c | 310 ++++++++++++++-----
 [...] | 17 +-
 [...] | 72 +++++
 include/linux/file.h | 136 +++++++++
 include/linux/namei.h | 10 +
 include/linux/net.h | 16 +
 include/linux/sched.h | 3 +
 include/linux/syscalls.h | 12 +
 include/uapi/asm-generic/errno.h | 3 +
 include/uapi/asm-generic/fcntl.h | 4 +
 include/uapi/linux/Kbuild | 1 +
 include/uapi/linux/capsicum.h | 343 +++++++++++++++++++++
 include/uapi/linux/prctl.h | 14 +
 include/uapi/linux/seccomp.h | 10 +
 ipc/mqueue.c | 30 +-
 kernel/events/core.c | 14 +-
 kernel/module.c | 10 +-
 kernel/seccomp.c | 2 +
 kernel/sys.c | 33 +-
 security/capsicum-rights.c | 201 +++++++++++++
 security/capsicum-rights.h | 10 +
 security/capsicum.c | 380 ++++++++++++++++++++++++
 sound/core/pcm_native.c | 10 +-
 tools/testing/selftests/Makefile | 1 +
 tools/testing/selftests/openat/.gitignore | 3 +
 tools/testing/selftests/openat/Makefile | 24 ++
 tools/testing/selftests/openat/openat.c | 146 +++++++++
 virt/kvm/eventfd.c | 6 +-
 virt/kvm/vfio.c | 12 +-
 120 files changed, 2818 insertions(+), 533 [...]
https://lkml.org/lkml/2014/7/25/426
Some of the most engaging Web page effects occur when one element replaces another. For example, a paragraph may seem to change into an image, or a table might appear where a drop-down menu once stood. You can spend time writing JavaScript code to make this happen or save time by calling jQuery's replaceWith method.

ReplaceWith: It Only Takes One Line of Code

A web page may contain hundreds of elements such as hyperlinks, text boxes, and headings. Because elements have top and left properties, you can move an element anywhere by changing the values of those properties. You could even move a button on top of an image if you liked. However, this type of element manipulation is not the same as using jQuery's replaceWith method; that method replaces an element with something else. "Something else" can be another element, an array of elements, an HTML string or a jQuery object.

You don't need an elephant's memory to remember the syntax of the replaceWith method. The following line of code shows how simple this method is:

$("#paragraph1").replaceWith("Replacement text");

When this code runs, the replaceWith method operates on the selector that appears after the dollar sign. A selector identifies one or more elements on a web page. In this example, the selector is "#paragraph1". The hash symbol lets jQuery know that you want it to operate on the element with an ID of paragraph1. Because this code passes "Replacement text" to the replaceWith method, browsers find that paragraph and replace it with "Replacement text".

Need to learn how jQuery selectors work? Udemy tips can help.

You can also create a text string that contains HTML and pass that to the replaceWith method, as shown below:

$("#paragraph1").replaceWith("<h1>This is a Heading</h1>");

This statement replaces the paragraph with an h1 heading. This type of string replacement is ideal for constructing an HTML element in real time and using it to replace an element on your Web page.
For example, you could replace a check box with a hyperlink by creating an HTML string that contains an anchor tag and passing that string to the replaceWith method.

Add More Functionality to ReplaceWith Using Functions

Another way to pass an HTML string to the replaceWith method is to use a function as an argument. That function can create the string that the replaceWith method uses in its replacement operation. The following code shows how you might pass the name of a function named myFunction to the replaceWith method:

$("#paragraph1").replaceWith(myFunction);

Assume that the myFunction method looked like the one shown below:

function myFunction() {
  var htmlVal = "<h1>This is a Heading</h1>";
  return (htmlVal);
}

This function creates an HTML string that defines an h1 heading. The function passes that string back to the replaceWith method. When that happens, jQuery replaces the paragraph whose ID is "paragraph1" with the h1 heading. If you need to create a complex HTML element using text, a function is an ideal place to do that. Do not put quotes around the function name that you pass to the replaceWith method.

Working with Real Elements

JQuery allows you to replace an HTML element by passing the ID of a real element to the replaceWith method. Unlike HTML strings that create elements dynamically, real elements already reside somewhere on a Web page. Consider the statement shown below:

$("#paragraph1").replaceWith( $("#textBox1") );

This code assumes that your HTML document contains an element whose ID is "textBox1". The syntax for passing a real element to replaceWith is a little different from the syntax you use to pass it an HTML string. To pass an element, surround the selector with quotes. The selector in this example is the ID of the element you are passing in as the replacement. Remember that when you work with IDs, you place a hash symbol before the ID name. Finally, you put the selector inside parentheses and place a dollar sign before the left parenthesis.
When you do this, you get the following:

$("#textBox1")

That becomes the argument to the replaceWith method. Learn more about jQuery syntax from Udemy.

When your code runs, jQuery replaces the paragraph with the element with an ID of "textBox1". The interesting thing about this type of replacement is that the replacement element moves from its original location to the location of the element it's replacing. If the replacement element was a text box, you'd see the paragraph disappear and your text box appear in its place. Any text in the text box would also move along with the text box. The replacement process is similar to cloning an element. The only difference is that the object you're replacing disappears from the Web page. If you don't want any elements on a page to disappear, do not use the replaceWith method. Instead, use another coding technique to move an element to its desired location.

As you can see, the replaceWith method enables you to handle quite a few replacement scenarios quickly. You may also find a use for the replaceAll method. It can replace all elements of a specific type with other elements, as shown in the following example:

$("<h1>This is a Heading</h1>").replaceAll("p");

Note that the argument in the replaceAll method is the selector that represents the element you want to replace. That element is "p" in this example. The content that replaces that element resides after the dollar sign at the beginning of the statement. When this code runs, jQuery replaces all paragraphs with h1 headings.

Putting it All Together

While making all paragraphs change into links might be fun to see, there are practical uses for replacing elements with other elements. For instance, you might want images to change to the word "Selected" when people clicked them. You could do that by giving images unique IDs and adding a click event to each image. The click event would call a function that executed the replaceWith method.
Your selector would be the ID of the image someone clicked, and the replacement content could consist of an HTML string that defined a span tag, as shown below:

$("#image1").replaceWith("<span>Selected</span>");

Now that you know how to replace elements on a Web page, you'll probably discover other ways the replaceWith method can help you create more compelling content that changes as people interact with your website. Get great tips about jQuery event handlers at Udemy.
https://blog.udemy.com/jquery-replacewith/
Control digital loggers web power switch

DESCRIPTION

This is a python module and a script to manage the Digital Loggers Web Power switch. The module provides a python class named PowerSwitch that allows managing the web power switch from python programs. When run as a script this acts as a command line utility to manage the DLI Power switch.

SUPPORTED DEVICES

This module has been tested against the following Digital Loggers network power switches:
- WebPowerSwitch II
- WebPowerSwitch III
- WebPowerSwitch IV
- WebPowerSwitch V
- Ethernet Power Controller III

Example

from __future__ import print_function
import dlipower

print('Connecting to a DLI PowerSwitch at lpc.digital-loggers.com')
switch = dlipower.PowerSwitch(hostname="lpc.digital-loggers.com", userid="admin")

print('Turning off the first outlet')
switch.off(1)

print('The powerstate of the first outlet is currently', switch[0].state)

print('Renaming the first outlet as "Traffic light"')
switch[0].name = 'Traffic light'

print('The current status of the powerswitch is:')
print(switch)

Output:

Connecting to a DLI PowerSwitch at lpc.digital-loggers.com
Turning off the first outlet
The powerstate of the first outlet is currently OFF
Renaming the first outlet as "Traffic light"
The current status of the powerswitch is:
DLIPowerSwitch at lpc.digital-loggers.com
Outlet Hostname State
1 Traffic light OFF
2 killer robot ON
3 Buiten verlicti ON
4 Meeting Room Li OFF
5 Brocade LVM123 ON
6 Shoretel ABC123 ON
7 Shortel 24V - T ON
8 Shortel 24V - T ON
https://pypi.org/project/dlipower/
Difference between revisions of "RetroArch"

Latest revision as of 14:20, 4 August 2017

1. Install the retroarch package, or alternatively retroarch-gitAUR for the development version.
2. Install the retroarch-assets-xmb package so that the XMB menu assets display properly.

retroarch --libretro /usr/lib/libretro/libretro-core.so path/to/rom

A default emulation core can be defined in the configuration, obviating the need to specify it on every run.

/etc/retroarch.cfg or ~/.config/retroarch/retroarch.cfg

libretro_path = "/usr/lib/libretro/libretro-core.so"

Configuration

RetroArch provides a very well commented skeleton configuration file located at /etc/retroarch.cfg. Copy the skeleton configuration file to your home directory:

$ cp /etc/retroarch.cfg ~/.config/retroarch/retroarch.cfg

It supports split configuration files using the #include "foo.cfg" directive within the main configuration file, retroarch.cfg. This can be overridden using the --appendconfig /path/to/config parameter and is beneficial if different keybinds, video configurations or audio settings are required for the various implementations.

Paths

Some modifications are required to use the correct paths for Arch packages:

assets_directory = "/usr/share/retroarch/assets"
libretro_info_path = "/usr/share/libretro/info"
libretro_directory = "/usr/lib/libretro"
joypad_autoconfig_dir = "/usr/share/retroarch/autoconfig"

see #Paths above.

Add your user to the input group to allow access to input devices, e.g.:

# usermod -a -G input username

Alternatively, manually add a rule in /etc/udev/rules.d/99-evdev.rules, with KERNEL=="event*", NAME="input/%k", MODE="666" as its contents. Reload udev rules by running:

# udevadm control --reload-rules

If rebooting the system or replugging the devices are not options, permissions may be forced using:

# chmod 666 /dev/input/event*

Poor video performance

If poor video performance is met, RetroArch may be run on a separate thread by setting video_threaded = true in ~/.config/retroarch/retroarch.cfg.
This is, however, a solution that should not be used if tweaking RetroArch's video resolution/refresh rate fixes the problem, as it makes perfect V-Sync impossible and slightly increases latency.
https://wiki.archlinux.org/index.php?title=RetroArch&diff=cur&oldid=251952
Consider two headers that define the same macro with different values:

// A.h
#define MY_MACRO 3

// B.h
#define MY_MACRO 45

#include "A.h"
#include "B.h"

int my_value = MY_MACRO;

From the standard (draft), [cpp.replace] §2:

An identifier currently defined as an object-like macro (see below) may be redefined by another #define preprocessing directive provided that the second definition is an object-like macro definition and the two replacement lists are identical, otherwise the program is ill-formed. [...]

What happens when you redefine a macro? When the new definition is different, your program is ill-formed. The compiler is required to show you a diagnostic message (a warning, or an error). The behaviour of an ill-formed program is not defined by the standard. The compiler is free to refuse compiling the program.

What will my_value be, 3 or 45? Whatever your pre-processor/compiler chooses. Or the compiler could refuse to compile it.

Technically, the program would become well-formed if you first undefined the macro. Then the defined value would obviously be the newly defined one. However, I do not recommend this, because you can then easily break other rules depending on the order of inclusion of headers in multiple translation units. Most likely, the two macros are supposed to be separate entities, and there are different files that expect the definition from one header, and not the other. The correct solution is to give each a unique name by renaming one, and change the dependent files to use the new name. Figuring out which files use which definition may be a challenge. While you're at it, you may want to replace the macro with a constexpr variable.
https://codedump.io/share/ocAcviUMdvYE/1/what-happens-when-you-redefine-a-macro
This post attempts to predict the returns of a time series, namely Bitcoin, using only text data from relevant articles. BERT, an NLP deep learning network, will be used to do sentiment analysis on the text. I've chosen Bitcoin for this experiment since its value has enormous volatility and it is very prone to change with sudden hypes and fears, usually reflected in newspapers. Although Bitcoin is a cryptocurrency and not a stock, strictly speaking, it can be bought and sold in the same fashion. This makes it perfectly suitable for our needs.

Text regression: What are we up to?

The idea in this post is to use NLP to do text regression. This technique consists of encoding input text as numerical vectors and then using them to make a regression analysis and estimate an output value. In our case, the input data will be text from articles related to Bitcoin, encoded and transformed with BERT, and the target value will be the returns of Bitcoin's close values at the publication date of such articles.

Background check: What is BERT?

BERT (Bidirectional Encoder Representations from Transformers) is a neural net tasked to solve any kind of NLP problem. Developed by researchers at Google, it soon became the state of the art, breaking records in many different NLP benchmarks (paper). Language modeling networks are usually trained by randomly masking words in each sentence and trying to predict them given the previous or following ones. For example, we mask the word "mat" in the sentence: "The cat sat on the ____." The network will try to guess the word "mat" knowing the previous ones. Given millions of ordered sentences, these networks learn to predict accurately the empty spaces looking at the text either before or after the masked word, but not both.
This is where BERT’s core novelty comes into play, since it is designed to learn from the text both before and after the masked words (that is what Bidirectional means). BERT overcomes this difficulty by using two techniques, Masked LM (MLM) and Next Sentence Prediction (NSP), which are out of the scope of this post. This lets BERT have a much deeper sense of language context than previous solutions.

Creating the dataset

To retrieve articles related to Bitcoin I used some awesome Python packages which came in very handy, like google search and news-please. The former emulates a search in Google and retrieves a set of URLs. The latter extracts a lot of information from an article (publication date, authors, main title, main text, etc.) given a URL. Combining these two, I retrieved the top 5 articles written in English that came up in Google News when introducing the search term “Bitcoin | Cryptocurrency”, for every day between 2019-01-01 and 2020-03-19. Being able to simulate a Google search guarantees that the top articles that appear on the list are the most relevant ones, and one could guess that they are the ones that had the most impact. The bar between the two keywords in the search term acts as the OR operator. In total, I collected a dataset of 2210 articles. Here you have a small sample:

Sentiment analysis with BERT

Here comes the interesting part: it’s time to extract the sentiment of all the text we’ve just gathered. BERT is a heavyweight when it comes to computational resources, so, after some tests, I decided to work only with the text in the title and description of each article. I split all these pieces of text into sentences. In the end, my dataset consisted of a bunch of sentences grouped by the day they were published. I used a version of BERT available as a Huggingface transformer which is pre-trained to do sentiment analysis on product reviews. Given a product review, it predicts its “sentiment” as a number of stars (between 1 and 5).
Even though product review text and newspaper text are fairly different, we will see that this model works surprisingly well on our data. Here you have an example:

sentence = "Bitcoin futures are trading below the cryptocurrency's spot price"
sentence_ids = tokenizer.encode(sentence)
bert_model.predict([sentence_ids])

#   1 star      2 stars    3 stars    4 stars      5 stars
[[ 0.62086743  0.7408671  0.599566  -0.50914824  -1.2169912 ]]

The transformer comes in two parts: the main model, in charge of making the sentiment predictions, and the tokenizer, used to transform the sentence into ids which the model can understand. The tokenizer does this by looking up each word in a dictionary and replacing it with its id. Before making predictions for all the sentences in our dataset, we need to make sure that the model understands the most important words related to Bitcoin. After word-counting my sentences based on a set of keywords, I added the terms [‘bitcoin’, ‘cryptocurrency’, ‘crypto’, ‘cryptocurrencies’, ‘blockchain’] to the tokenizer, which got assigned specific ids. Adding these terms lets the network distinguish them as individual words; otherwise, all of them would have been replaced by the id for “unknown words”.

5-star predictions to stock returns

Afterward, BERT made 5-star predictions for all the sentences, just as if they were reviews of products available on Amazon. I computed the average of each star score over the sentences belonging to each day, and I trained a simple LSTM network on the resulting data.
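The dictionary lookup performed by the tokenizer, and the role of the "unknown word" id, can be illustrated with a toy vocabulary. Note this is only a sketch: the words and ids below are invented, and the real Huggingface tokenizer works on subword units with its own vocabulary.

```python
# Toy illustration of tokenization by dictionary lookup.
# Vocabulary and ids are invented for this sketch.
vocab = {"bitcoin": 5, "is": 6, "trading": 7, "below": 8, "spot": 9, "price": 10}
UNK_ID = 0  # id used for words missing from the vocabulary

def encode(sentence):
    # Look each word up in the dictionary; unknown words map to UNK_ID.
    return [vocab.get(word, UNK_ID) for word in sentence.lower().split()]

print(encode("Bitcoin is trading below spot price"))  # [5, 6, 7, 8, 9, 10]
print(encode("Bitcoin is trading below blockchain"))  # [5, 6, 7, 8, 0]

# Adding a domain term mimics extending the tokenizer's vocabulary:
vocab["blockchain"] = 11
print(encode("Bitcoin is trading below blockchain"))  # [5, 6, 7, 8, 11]
```

This is exactly why adding the Bitcoin-related terms matters: without them, every occurrence of "blockchain" or "cryptocurrency" collapses onto the same "unknown" id and the model cannot tell them apart.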
# Assumes: from tensorflow.keras.models import Sequential
# and: from tensorflow.keras.layers import LSTM, TimeDistributed, Dense, Flatten
def lstm_model(n_steps):
    n_stars = 5
    model = Sequential()
    model.add(LSTM(100, input_shape=(n_steps, n_stars), return_sequences=True))
    model.add(TimeDistributed(Dense(20, activation='elu')))
    model.add(Flatten())
    model.add(Dense(1, activation='elu'))
    model.compile(loss='mean_squared_error', optimizer='adam')
    print(model.summary())
    return model

I trained the model with last year’s data (articles between 1/1/2019 and 1/11/2019) and tested it on 2020 data (up to 19/3). After playing with the network’s hyperparameters for a while, it yielded the following result.

What just happened? I know, it’s far from perfect. But if you look a little closer you will notice that there are trends that have been recognized by the BERT and LSTM duo. The predictions start in November and correctly go down with the real series. Christmas is a complete mess, since our model keeps predicting a downward trend while Bitcoin is rising abruptly. Then we can see how the model reacts and reaches a maximum which, beautifully, coincides with the one in the real series. About the predictions at the beginning of March, no comments. It is true that such a stark change is difficult to foresee (even by newspapers?). In general, the model can predict small peaks and valleys more or less accurately. This is a very good sign if we take into account that the model is not using anything but text as input. Take a look at the figure with the raw returns below. The quality of our data is not extremely good: in the end, we are using the top 5 articles which are most “relevant” according to Google (whatever that means). If we collected the text more thoroughly, we would probably obtain better results.

Raw vs predicted returns for the test period

Tesla: a second opinion

I was curious about how this approach would do in a different case. Tesla is a stock which behaves pretty crazily: Elon Musk writes an enigmatic tweet and TSLA stock shakes. This is perfect for our experiment. I followed the exact same process.
First, I retrieved the top 5 articles for each day. Then I passed the text through BERT and applied an LSTM network at the end. Here are the results:

We can see how the model predictions follow the starting upward trend, identifying some valleys in January and February. However, it completely ignores the price drop in March (but who saw that coming, right?). Here you have the raw returns:

Key takeaways

The LSTM used sequences of 10 timesteps (that is, it uses data from the past 10 days to predict tomorrow’s returns). When using a larger or smaller number of timesteps, the predictions became unstable. To gain stability, we could use the price difference across days as the target value instead of returns. Another option is to design a strategy that predicts a fixed growth depending on the positivity or negativity of the sentiment.

To conclude this post, I want to highlight the following points:

- It seems that news articles influence market movements at times.
- BERT is an extremely powerful network capable of solving many NLP tasks, among them sentiment analysis.
- NLP models based on news data could be useful to complement investment portfolio strategies.

To check the code used for this post, take a look at my GitHub repository.

Hi Juan, thanks for the great post. I’ve also been developing market forecasting models based on sentiment. In my case, for the FX market at least, using larger time frames seemed to perform better. For instance, averaging sentiment from 30 days in the past to forecast returns (and not actual prices) 2 weeks into the future.
https://quantdare.com/can-neural-networks-predict-the-stock-market-just-by-reading-newspapers/
This page was generated from od/methods/llr.ipynb.

Likelihood Ratios for Outlier Detection

Overview

The outlier detector described by Ren et al. (2019) in Likelihood Ratios for Out-of-Distribution Detection uses the likelihood ratio (LLR) between 2 generative models as the outlier score. One model is trained on the original data while the other is trained on a perturbed version of the dataset. This is based on the observation that the log likelihood for an instance under a generative model can be heavily affected by population-level background statistics. The second generative model is therefore trained to capture the background statistics still present in the perturbed data, while the semantic features have been erased by the perturbations. The perturbations are added using an independent and identical Bernoulli distribution with rate \(\mu\), which substitutes a feature with one of the other possible feature values with equal probability. For images, this means for instance replacing a pixel with a different pixel value randomly sampled within the \(0\) to \(255\) pixel range.

The package also contains a PixelCNN++ implementation adapted from the official TensorFlow Probability version, available as a standalone model in alibi_detect.models.tensorflow.pixelcnn.

Usage

Initialize

Parameters:

threshold: outlier threshold value used for the negative likelihood ratio. Scores above the threshold are flagged as outliers.

model: a generative model, either as a tf.keras.Model, a TensorFlow Probability distribution or the built-in PixelCNN++ model.

model_background: optional separate model fit on the perturbed background data. If this is not specified, a copy of model will be used.

log_prob: if the model does not have a log_prob function like e.g. a TensorFlow Probability distribution, a function needs to be passed that evaluates the log likelihood.

sequential: flag whether the data is sequential or not. Used to create targets during training. Defaults to False.
data_type: can specify the data type added to the metadata, e.g. ‘tabular’ or ‘image’.

Initialized outlier detector example:

from alibi_detect.od import LLR
from alibi_detect.models.tensorflow import PixelCNN

image_shape = (28, 28, 1)
model = PixelCNN(image_shape)
od = LLR(threshold=-100, model=model)

Fit

We then need to train the 2 generative models in sequence. The following parameters can be specified:

X: training batch as a numpy array of preferably normal data.

mutate_fn: function used to create the perturbations. Defaults to an independent and identical Bernoulli distribution with rate \(\mu\).

mutate_fn_kwargs: kwargs for mutate_fn. For the default function, the mutation rate and feature range need to be specified, e.g. dict(rate=.2, feature_range=(0,255)).

loss_fn: loss function used for the generative models.

loss_fn_kwargs: kwargs for the loss function.

optimizer: optimizer used for training. Defaults to Adam with learning rate 1e-3.

epochs: number of training epochs.

batch_size: batch size used during training.

log_metric: additional metrics whose progress will be displayed if verbose equals True.

od.fit(X_train, epochs=10, batch_size=32)

It is often hard to find a good threshold value. If we have a batch of normal and outlier data and we know approximately the percentage of normal data in the batch, we can infer a suitable threshold:

od.infer_threshold(X, threshold_perc=95, batch_size=32)

Detect

We detect outliers by calling predict on a batch of instances, where batch_size is the batch size used for model prediction calls:

od.predict(X, batch_size=32)

Examples

Image: Likelihood Ratio Outlier Detection with PixelCNN++

Sequential Data: Likelihood Ratio Outlier Detection on Genomic Sequences
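As a rough plain-Python illustration of the default Bernoulli perturbation described in the Fit section (this is a sketch of the idea, not the actual alibi-detect implementation, which operates on arrays):

```python
import random

def mutate(x, rate=0.2, feature_range=(0, 255)):
    """Bernoulli(rate) perturbation: each selected feature is replaced
    with a *different* value sampled uniformly from feature_range."""
    low, high = feature_range
    out = []
    for v in x:
        if random.random() < rate:
            # Resample until the value actually changes, so a perturbed
            # feature never keeps its original value.
            new = v
            while new == v:
                new = random.randint(low, high)
            out.append(new)
        else:
            out.append(v)
    return out

pixels = [0, 17, 255, 128]
print(mutate(pixels, rate=1.0))  # every pixel substituted
print(mutate(pixels, rate=0.0))  # [0, 17, 255, 128]
```

Training the background model on data perturbed this way erases semantic content while preserving population-level statistics, which is what the likelihood ratio then factors out.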
https://docs.seldon.io/projects/alibi-detect/en/latest/od/methods/llr.html
Event API.

- On Windows* OS platforms you can define Unicode to use a wide-character version of APIs that pass strings. However, these strings are internally converted to ASCII strings.
- On Linux* OS platforms only a single variant of the API exists.

Guidelines for Event API Usage

- An __itt_event_end() is always matched with the nearest preceding __itt_event_start(). Otherwise, the __itt_event_end() call is matched with the nearest unmatched __itt_event_start() preceding it. Any intervening events are nested.
- You can nest user events of the same or different type within each other. In the case of nested events, the time is considered to have been spent only in the most deeply nested user event region.
- You can overlap different ITT API events. In the case of overlapping events, the time is considered to have been spent only in the event region with the later __itt_event_start(). Unmatched __itt_event_end() calls are ignored.

To see events and user tasks in your results, create a custom analysis (based on the pre-defined analysis you are interested in) and select the Analyze user tasks, events and counters checkbox in the analysis settings.

Usage Example: Creating and Marking Single Events

The __itt_event_create API returns a new event handle that you can subsequently use to mark user events with the __itt_event_start API. In this example, two event type handles are created and used to set the start points for tracking two different types of events.

#include "ittnotify.h"

__itt_event mark_event = __itt_event_create( "User Mark", 9 );
__itt_event frame_event = __itt_event_create( "Frame Completed", 15 );
...
__itt_event_start( mark_event );
...
for ( int f = 0; f < number_of_frames; f++ )
{
    ...
    __itt_event_start( frame_event );
}

Usage Example: Creating and Marking Event Regions

An __itt_event_start API call can be followed by an __itt_event_end API call to define an event region, as in the following example:

#include "ittnotify.h"

__itt_event render_event = __itt_event_create( "Rendering Phase", 15 );
...
for ( int f = 0; f < number_of_frames; f++ )
{
    ...
    do_stuff_for_frame();
    ...
    __itt_event_start( render_event );
    ...
    do_rendering_for_frame();
    ...
    __itt_event_end( render_event );
    ...
}
https://www.intel.com/content/www/us/en/develop/documentation/vtune-help/top/api-support/instrumentation-and-tracing-technology-apis/instrumentation-tracing-technology-api-reference/event-api.html
TL;DR - The Resource Groups Tagging API can help you fetch resource tags in bulk, even if you don't use resource groups!

The Problem

You want to programmatically build a list of active AWS resources. For a service like EC2, you call DescribeInstances and get tags included in the response. Yay! For other services (I'll use RDS here), you need to do this in two steps:

- Scan for active resources (DescribeDBInstances)
- Fetch tags for those resources

There are a couple different ways to handle that second step, and the simplest one will start to hurt as your resource count grows.

Fetching Tags - Simplest Way

The RDS API has a ListTagsForResource action. If you've got resources and need their tags, that's the most obvious way to get them. The wrinkle is that you can only fetch tags for one resource at a time. So that's a bit frustrating, but with a bit of looping it all seems doable:

import boto3

rds = boto3.client('rds')

# Build a mapping of instance ARNs to details
instances = {
    instance['DBInstanceArn']: instance
    for instance in rds.describe_db_instances()['DBInstances']
}

# Add tag detail to each instance
for arn, instance in instances.items():
    instance['Tags'] = rds.list_tags_for_resource(ResourceName=arn).get('TagList')

This works fine for a handful of resources, but will eventually choke for a couple reasons:

- DescribeDBInstances can only return up to 100 records in a single call. Some of you likely noticed this oversight. Fortunately we can work around that by using boto3's excellent paginator support.
- If you solve for the pagination issue, that means you have over 100 resources. That also means you're bombarding the RDS API with over 100 ListTagsForResource calls over a short period of time. While AWS doesn't currently publicize the rate limits for the RDS API, they will still bite you.
Avoiding API Rate Limits / Throttling

If you bump into API rate limits while trying to fetch resource tags, you'll probably have two thoughts:

- I'll be fine if I'm responsible about using backoff / retry logic.
- Still, I really wish I could pull tags for more than one resource at a time!

The Resource Groups Tagging API

I have to be honest here, I completely ignored the Resource Groups Tagging API for a long time. I don't use Resource Groups much, and I wrongly assumed that something called the Resource Groups Tagging API would be aimed at managing tags for Resource Groups. I still think that was a reasonable assumption... But for our purpose here, let's look specifically at the GetResources action. As the docs point out, it:

Returns all the tagged or previously tagged resources that are located in the specified region for the AWS account.

Well how about that! That means code like this can help find all DB instance tags in a given account and region (note that get_resources lives on the resourcegroupstaggingapi client, not the rds client):

tagging = boto3.client('resourcegroupstaggingapi')
instance_tags = tagging.get_resources(ResourceTypeFilters=['rds:db'])

We still need to address paginated responses to handle more than 100 resources, but this is already a huge win:

- In the most extreme case, we've reduced our API call count by a factor of 100.
- This reduced number of calls doesn't even target the RDS API anymore.

That means we get the data we need more quickly, with less risk of error and less impact to others working in the same account and region.

Putting It All Together

With all of this in mind, here's a breakdown of just one way to get a list of RDS instances and tags.

from itertools import chain

import boto3

rds = boto3.client('rds')
tagging = boto3.client('resourcegroupstaggingapi')

We'll use chain in a little bit to make working with lists of lists nicer.
# Build a mapping of instance ARNs to details
paginator = rds.get_paginator('describe_db_instances')
instances = {
    instance['DBInstanceArn']: instance
    for page in paginator.paginate()
    for instance in page['DBInstances']
}

This is a pagination-friendly version of the mapping we built earlier.

# Fetch tag data for all tagged (or previously tagged) RDS DB instances
paginator = tagging.get_paginator('get_resources')
tag_mappings = chain.from_iterable(
    page['ResourceTagMappingList']
    for page in paginator.paginate(ResourceTypeFilters=['rds:db'])
)

The get_resources() paginator gives us a collection of pages, and each page has a collection of results. chain.from_iterable() helps us treat this "list of lists" as a single collection. Using rds:db as a resource type filter ensures that we only fetch tags for RDS DB instances, rather than bringing other RDS resources (like snapshots) along for the ride.

# Add tag detail to each instance
for tag_mapping in tag_mappings:
    # Convert list of Key/Value pairs to dict for convenience
    tags = {tag['Key']: tag['Value'] for tag in tag_mapping['Tags']}
    instances[tag_mapping['ResourceARN']]['Tags'] = tags

Thanks to chain.from_iterable(), we can loop over tag_mappings as if it were a flat list. Since having a tag dictionary is typically more useful than a list of Key/Value pairs, we may as well convert it.

Acknowledgements

Big shout out here to the Cloud Custodian project. Its centralized approach to tag-fetching was what helped me realize how mistaken I was to overlook the Resource Groups Tagging API for so long. For AWS discussions that go beyond or outside the official documentation, I've found the Open Guide to AWS repo and Slack channel to be immensely useful.

Feedback

If you've gotten this far, thanks for reading! I like to chat about Python and/or AWS, so please say hi :). Please fire away in the comments if you have suggestions to improve this post, or better ways to fetch tags at scale.

Top comments (3)

Great write-up.
However the Resource Groups Tagging API still has a major flaw in that it cannot give you tag data on provisioned resources that have never been tagged.

That's true. Out of curiosity, when has that been an issue for you? When I'm pulling tags, I'm typically also pulling other information from a service API and merging it with the tag data. There have been a couple exceptions though:

- When I've wanted a central, service-neutral way to report on untagged resources. This isn't bad to work around most of the time.
- When I wanted to reliably list all SQS queues in an account with more than 1,000 queues. This one is a bit more tedious since ListQueues doesn't support pagination. It's doable by listing queues with different name prefixes, it just feels like an awkward edge case. If the Resource Groups Tagging API could reliably list all queues regardless of whether they had ever been tagged, it would have been a nice surprise.

I'm curious about other places where the Resource Groups Tagging API is almost a good fit.

One major issue that comes up is if you want to use tags to track billing. This requires some sort of tagging compliance system where untagged resources in an application are detected and remediated, so that organizations can have the most accurate data possible on the cost of their applications. If the Resource Groups Tagging API also returned provisioned resources that were never tagged, tagging governance would be a lot less complicated in general.
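The backoff/retry logic mentioned earlier in the post can be sketched generically without any AWS dependency (boto3 can also handle retries for you via its client configuration). In this sketch the `Throttled` exception and the timings are illustrative stand-ins, not a real service error:

```python
import time

class Throttled(Exception):
    """Stand-in for a rate-limit error from a service API."""

def with_backoff(fn, max_attempts=5, base_delay=0.1):
    """Call fn(), retrying on Throttled with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Throttled:
            if attempt == max_attempts - 1:
                raise  # out of attempts, surface the error
            time.sleep(base_delay * (2 ** attempt))

# Simulated flaky call: throttled twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise Throttled()
    return "tags"

print(with_backoff(flaky, base_delay=0.01))  # 'tags'
```

Even with the Tagging API reducing call volume, wrapping calls like this (or enabling boto3's built-in retry modes) keeps an occasional throttle from failing the whole run.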
https://practicaldev-herokuapp-com.global.ssl.fastly.net/ajkerrigan/fetch-a-bunch-of-aws-resource-tags-without-being-throttled-4hhc
The effect is to set up a trigger box. When the player enters it, a game-end interface is displayed and the game ends.

1. New canvas

Create a new canvas in the Hierarchy and name it Canvas. Double click the canvas you just created. If necessary, its properties can be adjusted. By default, this canvas will cover the entire screen. When editing the UI, you should turn off scene effects and switch to the 2D view. (as shown in the figure below)

2. New background

Select the canvas you just created and create a new UI Image under it, named Background. This component is used to set the background of the UI. Select the Background you just created and adjust its values so it fills the whole canvas. You can also change its background color through the Color property of its Image component. (as shown in the figure below)

3. Add picture on background

Right click the Background just created, and then create a new UI Image as its child, named Image. Click the new component to add a picture to it. Its position can also be adjusted. (as shown in the figure below) The effect is as follows.

4. Add Canvas Group component for background

The UI should not be displayed at the beginning, so it should be set to be transparent. Select the created Background and add a Canvas Group component to it. Set the Alpha attribute to 0 in this component. In this way, the UI will be transparent at the beginning; you can change its Alpha to make it display when you need it.

5. End trigger and display UI

Create an empty GameObject, add a Box Collider component to it, and enable the Is Trigger attribute. Add a script to this empty object, named GameEnding.
The code is as follows:

using UnityEngine;

public class GameEnding : MonoBehaviour
{
    bool PlayerAtExit = false;
    public GameObject player;

    // UI
    public CanvasGroup backgroundImageCanvasGroup;

    // Time to keep displaying the UI before quitting
    public float disableImageDuration = 4.1f;

    // Elapsed time, drives the transparency
    float timer;

    // Time over which the transparency changes
    public float fadeDuration = 1.0f;

    // Trigger event; the collider that entered is passed in
    private void OnTriggerEnter(Collider other)
    {
        // If the player enters the trigger
        if (other.gameObject == player)
        {
            PlayerAtExit = true;
        }
    }

    // Update is called once per frame
    void Update()
    {
        if (PlayerAtExit)
        {
            EndLevel();
        }
    }

    // End the level
    void EndLevel()
    {
        timer += Time.deltaTime;
        backgroundImageCanvasGroup.alpha = timer / fadeDuration;
        if (timer > fadeDuration + disableImageDuration)
        {
            // Exit the application (effective after building)
            Application.Quit();
#if UNITY_EDITOR
            // Exit play mode in the editor; the guard is needed because
            // the UnityEditor namespace does not exist in built players
            UnityEditor.EditorApplication.isPlaying = false;
#endif
        }
    }
}

Drag the player and the background into the script's fields. Run the game, and you can see that the UI is triggered successfully after the character walks into the trigger box.
https://developpaper.com/unity-create-a-simple-ui-interface-in-unity/
On Thu, Feb 04, 2021 at 02:50:08PM +0000, [...]:

Yes, broadly speaking I would actually agree with this. I think much of this could easily live outside of qemu.git, be it a separate repo under the QEMU project namespace, or a complete 3rd party. Especially the vhost-user stuff has no dependency on QEMU in general and could be used with other KVM userspaces.

> If we care about a bit of code enough to keep it in our source
> tree we ought to care about it enough to properly document
> and test it and give it a suitable place to live.

Regards,
Daniel
https://lists.gnu.org/archive/html/qemu-devel/2021-02/msg01587.html
Draw.

#include <CGAL/draw_periodic_2_triangulation_2.h>

opens a new window and draws p2t2, the Periodic_2_Triangulation_2. A call to this function is blocking, that is, the program continues as soon as the user closes the window. This function requires CGAL_Qt5, and is only available if the macro CGAL_USE_BASIC_VIEWER is defined. Linking with the cmake target CGAL::CGAL_Basic_viewer will link with CGAL_Qt5 and add the definition CGAL_USE_BASIC_VIEWER.
https://doc.cgal.org/latest/Periodic_2_triangulation_2/group__PkgDrawPeriodic2Triangulation2.html
Seasonality, Holiday Effects, And Regressors

Modeling Holidays and Special Events

If you have holidays or other recurring events that you’d like to model, you must create a dataframe for them. It has two columns (holiday and ds) and a row for each occurrence of the holiday. It must include all occurrences of the holiday, both in the past (back as far as the historical data go) and in the future (out as far as the forecast is being made). If they won’t repeat in the future, Prophet will model them and then not include them in the forecast.

You can also include columns lower_window and upper_window which extend the holiday out to [lower_window, upper_window] days around the date. For instance, if you wanted to include Christmas Eve in addition to Christmas you’d include lower_window=-1, upper_window=0. If you wanted to use Black Friday in addition to Thanksgiving, you’d include lower_window=0, upper_window=1. You can also include a column prior_scale to set the prior scale separately for each holiday, as described below.

Here we create a dataframe that includes the dates of all of Peyton Manning’s playoff appearances:

Above we have included the superbowl days as both playoff games and superbowl games. This means that the superbowl effect will be an additional additive bonus on top of the playoff effect.

Once the table is created, holiday effects are included in the forecast by passing them in with the holidays argument. Here we do it with the Peyton Manning data from the Quickstart:

The holiday effect can be seen in the forecast dataframe:

The holiday effects will also show up in the components plot, where we see that there is a spike on the days around playoff appearances, with an especially large spike for the superbowl:

Individual holidays can be plotted using the plot_forecast_component function (imported from prophet.plot in Python), like plot_forecast_component(m, forecast, 'superbowl') to plot just the superbowl holiday component.
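A holidays dataframe of the shape described above can be built like this. The dates below are an illustrative subset, not the full playoff list from the Prophet docs; a real dataframe must list every occurrence, past and future:

```python
import pandas as pd

# Illustrative subset of dates only; list all occurrences in practice.
playoffs = pd.DataFrame({
    'holiday': 'playoff',
    'ds': pd.to_datetime(['2013-01-12', '2014-01-12', '2014-02-02']),
    'lower_window': 0,
    'upper_window': 1,
})
superbowls = pd.DataFrame({
    'holiday': 'superbowl',
    'ds': pd.to_datetime(['2014-02-02']),
    'lower_window': 0,
    'upper_window': 1,
})
holidays = pd.concat([playoffs, superbowls])

# The frame is then passed to the model:
# m = Prophet(holidays=holidays)

print(sorted(holidays['holiday'].unique()))  # ['playoff', 'superbowl']
```

Note that 2014-02-02 appears under both holiday names, so the superbowl effect is fit as an extra additive bonus on top of the playoff effect, exactly as described above.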
Built-in Country Holidays

You can use a built-in collection of country-specific holidays using the add_country_holidays method (Python) or function (R). The name of the country is specified, and then major holidays for that country will be included in addition to any holidays that are specified via the holidays argument described above:

You can see which holidays were included by looking at the train_holiday_names (Python) or train.holiday.names (R) attribute of the model:

The holidays for each country are provided by the holidays package in Python. A list of available countries, and the country name to use, is available on their page. In addition to those countries, Prophet includes holidays for these countries: Brazil (BR), Indonesia (ID), India (IN), Malaysia (MY), Vietnam (VN), Thailand (TH), Philippines (PH), Pakistan (PK), Bangladesh (BD), Egypt (EG), China (CN), Russia (RU), Korea (KR), Belarus (BY), and United Arab Emirates (AE).

In Python, most holidays are computed deterministically and so are available for any date range; a warning will be raised if dates fall outside the range supported by that country. In R, holiday dates are computed for 1995 through 2044 and stored in the package as data-raw/generated_holidays.csv. If a wider date range is needed, this script can be used to replace that file with a different date range.

As above, the country-level holidays will then show up in the components plot:

Fourier Order for Seasonalities

Seasonalities are estimated using a partial Fourier sum. See the paper for complete details, and this figure on Wikipedia for an illustration of how a partial Fourier sum can approximate an arbitrary periodic signal. The number of terms in the partial sum (the order) is a parameter that determines how quickly the seasonality can change. To illustrate this, consider the Peyton Manning data from the Quickstart.
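Before looking at concrete fits, it helps to write the partial Fourier sum out: for period P and order N, the seasonal component is s(t) = Σₙ₌₁..N (aₙ cos(2πnt/P) + bₙ sin(2πnt/P)), so N terms correspond to 2N fitted coefficients. A small sketch, where the aₙ, bₙ values are arbitrary made-up numbers rather than fitted ones:

```python
import math

def fourier_seasonality(t, period, coeffs):
    """Partial Fourier sum of order N = len(coeffs).
    coeffs is a list of (a_n, b_n) pairs -- 2N variables in total."""
    return sum(
        a * math.cos(2 * math.pi * n * t / period) +
        b * math.sin(2 * math.pi * n * t / period)
        for n, (a, b) in enumerate(coeffs, start=1)
    )

# Order 3 => 6 fitted variables; coefficient values here are arbitrary.
coeffs = [(0.5, 0.1), (-0.2, 0.3), (0.05, -0.07)]
weekly = [fourier_seasonality(t, period=7, coeffs=coeffs) for t in range(7)]

# The sum is periodic: s(t) == s(t + period)
print(abs(fourier_seasonality(0, 7, coeffs) -
          fourier_seasonality(7, 7, coeffs)) < 1e-9)  # True
```

Raising the order adds higher-frequency cosine/sine terms, which is why a larger order lets the seasonality wiggle faster and also why it can overfit.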
The default Fourier order for yearly seasonality is 10, which produces this fit:

The default values are often appropriate, but they can be increased when the seasonality needs to fit higher-frequency changes and generally be less smooth. The Fourier order can be specified for each built-in seasonality when instantiating the model; here it is increased to 20:

Increasing the number of Fourier terms allows the seasonality to fit faster-changing cycles, but can also lead to overfitting: N Fourier terms correspond to 2N variables used for modeling the cycle.

Specifying Custom Seasonalities

Prophet will by default fit weekly and yearly seasonalities, if the time series is more than two cycles long. It will also fit daily seasonality for a sub-daily time series. You can add other seasonalities (monthly, quarterly, hourly) using the add_seasonality method (Python) or function (R). The inputs to this function are a name, the period of the seasonality in days, and the Fourier order for the seasonality. For reference, by default Prophet uses a Fourier order of 3 for weekly seasonality and 10 for yearly seasonality. An optional input to add_seasonality is the prior scale for that seasonal component - this is discussed below.

As an example, here we fit the Peyton Manning data from the Quickstart, but replace the weekly seasonality with monthly seasonality. The monthly seasonality then will appear in the components plot:

Seasonalities that depend on other factors

In some instances the seasonality may depend on other factors, such as a weekly seasonal pattern that is different during the summer than it is during the rest of the year, or a daily seasonal pattern that is different on weekends vs. on weekdays. These types of seasonalities can be modeled using conditional seasonalities.

Consider the Peyton Manning example from the Quickstart.
The default weekly seasonality assumes that the pattern of weekly seasonality is the same throughout the year, but we’d expect the pattern of weekly seasonality to be different during the on-season (when there are games every Sunday) and the off-season. We can use conditional seasonalities to construct separate on-season and off-season weekly seasonalities. First we add a boolean column to the dataframe that indicates whether each date is during the on-season or the off-season: Then we disable the built-in weekly seasonality, and replace it with two weekly seasonalities that have these columns specified as a condition. This means that the seasonality will only be applied to dates where the condition_name column is True. We must also add the column to the future dataframe for which we are making predictions. Both of the seasonalities now show up in the components plots above. We can see that during the on-season when games are played every Sunday, there are large increases on Sunday and Monday that are completely absent during the off-season. Prior scale for holidays and seasonality If you find that the holidays are overfitting, you can adjust their prior scale to smooth them using the parameter holidays_prior_scale. By default this parameter is 10, which provides very little regularization. Reducing this parameter dampens holiday effects: The magnitude of the holiday effect has been reduced compared to before, especially for superbowls, which had the fewest observations. There is a parameter seasonality_prior_scale which similarly adjusts the extent to which the seasonality model will fit the data. Prior scales can be set separately for individual holidays by including a column prior_scale in the holidays dataframe. Prior scales for individual seasonalities can be passed as an argument to add_seasonality. 
For instance, the prior scale for just weekly seasonality can be set using:

Additional regressors

Additional regressors can be added to the linear part of the model using the add_regressor method or function. A column with the regressor value will need to be present in both the fitting and prediction dataframes. For example, we can add an additional effect on Sundays during the NFL season. On the components plot, this effect will show up in the ‘extra_regressors’ plot: NFL Sundays could also have been handled using the “holidays” interface described above, by creating a list of past and future NFL Sundays. The add_regressor function provides a more general interface for defining extra linear regressors, and in particular does not require that the regressor be a binary indicator. Another time series could be used as a regressor, although its future values would have to be known. This notebook shows an example of using weather factors as extra regressors in a forecast of bicycle usage, and provides an excellent illustration of how other time series can be included as extra regressors. The add_regressor function has optional arguments for specifying the prior scale (the holiday prior scale is used by default) and whether or not the regressor is standardized - see the docstring with help(Prophet.add_regressor) in Python and ?add_regressor in R. Note that regressors must be added prior to model fitting. Prophet will also raise an error if the regressor is constant throughout the history, since there is nothing to fit from it. The extra regressor must be known for both the history and for future dates. It thus must either be something that has known future values (such as nfl_sunday), or something that has separately been forecasted elsewhere. The weather regressors used in the notebook linked above are a good example of an extra regressor that has forecasts that can be used for future values.
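A minimal sketch of the NFL-Sunday regressor described above (the season window is again assumed to be Sep-Jan for illustration; the Prophet calls are commented since they assume prophet is installed):

```python
import pandas as pd

df = pd.DataFrame({'ds': pd.date_range('2015-01-01', periods=365)})

# Binary indicator: Sunday (dayofweek == 6) inside the assumed Sep-Jan season.
df['nfl_sunday'] = ((df['ds'].dt.dayofweek == 6)
                    & (df['ds'].dt.month.isin([9, 10, 11, 12, 1]))).astype(int)

# With prophet installed, the regressor is registered before fitting:
# m = Prophet()
# m.add_regressor('nfl_sunday')
# m.fit(df)            # df would also need a 'y' column here
# The 'nfl_sunday' column must be present in the future dataframe as well.
```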
One can also use as a regressor another time series that has been forecasted with a time series model, such as Prophet. For instance, if r(t) is included as a regressor for y(t), Prophet can be used to forecast r(t) and then that forecast can be plugged in as the future values when forecasting y(t). A note of caution around this approach: it will probably not be useful unless r(t) is somehow easier to forecast than y(t). This is because error in the forecast of r(t) will produce error in the forecast of y(t). One setting where this can be useful is in hierarchical time series, where there is a top-level forecast that has higher signal-to-noise and is thus easier to forecast. Its forecast can be included in the forecast for each lower-level series. Extra regressors are put in the linear component of the model, so the underlying model is that the time series depends on the extra regressor as either an additive or multiplicative factor (see the next section for multiplicativity).

Coefficients of additional regressors

To extract the beta coefficients of the extra regressors, use the utility function regressor_coefficients (from prophet.utilities import regressor_coefficients in Python, prophet::regressor_coefficients in R) on the fitted model. The estimated beta coefficient for each regressor roughly represents the increase in prediction value for a unit increase in the regressor value (note that the coefficients returned are always on the scale of the original data). If mcmc_samples is specified, a credible interval for each coefficient is also returned, which can help identify whether each regressor is “statistically significant”.
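The interpretation of a beta coefficient as "increase in prediction per unit increase in the regressor" can be illustrated with a toy least-squares fit. This is a generic sketch, not Prophet's regressor_coefficients implementation:

```python
import numpy as np

# Toy model: y = 1.5 + 3.0 * x + noise. The fitted slope is read off as
# "change in prediction per unit increase in x", mirroring how an extra
# regressor's beta coefficient is interpreted.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 1.5 + 3.0 * x + rng.normal(scale=0.1, size=200)

A = np.column_stack([np.ones_like(x), x])   # intercept column + regressor
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
intercept, beta = coef                      # beta should be close to 3.0
```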
https://facebook.github.io/prophet/docs/seasonality,_holiday_effects,_and_regressors.html
CC-MAIN-2022-40
en
refinedweb
Just to clarify better the model of device I am talking about/using here... Valter

Hi, this post is a follow-up to a previous post on the subject of using the UnitCam with the Arduino IDE. In the case of the Arduino IDE, we can install older versions of the whole arduino-esp32 framework, of which the esp32-camera module is a part, so by installing the entire old package we also get the old camera module inside it... This post details how to download older versions of the code and "install" them into the Arduino IDE Programming Framework. The method described here is a MANUAL procedure: downloading code and placing it inside the Arduino framework "by hand", so to speak... so you learn the details of how to "manually place code inside the Arduino IDE"... *I also did a text/post on how to install old versions of esp32-camera for projects using the Espressif ESP-IDF Programming Framework here: [] This little tutorial DOES NOT COVER the use of the IDE Board Manager to install software libraries, in which you can choose predefined version releases of the code and then click "install version xx.xx.xx", so that the IDE automatically downloads all respective packages and places them inside the Arduino IDE environment... If the IDE Board Manager method already satisfies your needs, you may not need the present tutorial... So, the tutorial presented here is a "manual" way to download and "install" software inside the IDE, giving the user a little bit of knowledge and control over how to choose different versions of software components (more about it at the end of this text)... This little tutorial uses Raspberry Pi RaspOS GNU/Linux (based on Debian GNU/Linux) to explain the steps; the overall concepts and procedures are the same for other environments as well.
STEP 1: Run Arduino IDE in Portable Mode

First, before talking about the ESP32 software library, it is a good idea to explain how to run the Arduino IDE in Portable Mode. When running in Portable Mode, all the configurations and downloaded software components go inside the main Arduino IDE folder, into another folder called "portable". In this way, this installation (folder) is COMPLETELY INDEPENDENT, holding everything inside the main folder. As a result it is possible to have MULTIPLE installations with different software components and configurations, and there will be NO CONFLICTS between them! Portable mode is also good because it provides a way for the user to carry the folder to any computer (of the same platform) and run it without any need for a normal installation process; this is the meaning of the "portable" in portable mode. For our little tutorial, Portable Mode helps us have a completely isolated and independent installation, to use different versions of software components, or to do tests, etc... so that your own Arduino IDE installation will NOT BE AFFECTED! It is very simple to run the Arduino IDE in Portable Mode: we just need to create an empty folder with the name "portable" inside the main Arduino IDE folder. For example: after downloading the file "arduino-1.8.13-linuxarm.tar.xz" from the Arduino website, extract it with the command (linux):

tar -xf arduino-1.8.13-linuxarm.tar.xz

The software is extracted and placed inside a new folder, "arduino-1.8.13".
-rw-r--r--  1 pi pi 96485196 Jun 16  2020 arduino-1.8.13-linuxarm.tar.xz
drwxr-xr-x 10 pi pi     4096 Jun 16  2020 arduino-1.8.13

Inside the folder "arduino-1.8.13", we have the following:

arduino-1.8.13
├── examples/
├── hardware/
├── java/
├── lib/
├── libraries/
├── reference/
├── tools/
├── tools-builder/
├── arduino*
├── arduino-builder*
├── arduino-linux-setup.sh*
├── install.sh*
├── revisions.txt
└── uninstall.sh*

("/" is a folder, "*" is an executable)

Now, let's go inside this main folder and create the "portable" folder:

#go inside the folder
cd arduino-1.8.13/
#create a new empty folder with the name "portable"
mkdir portable

Now, we have the following:

arduino-1.8.13
├── examples/
├── hardware/
├── java/
├── lib/
├── libraries/
├── portable/
├── reference/
├── tools/
├── tools-builder/
├── arduino*
├── arduino-builder*
├── arduino-linux-setup.sh*
├── install.sh*
├── revisions.txt
└── uninstall.sh*

("/" is a folder, "*" is an executable)

Then, if we run the "arduino" executable:

./arduino

we should have the Arduino IDE in Portable Mode!

[IDE picture]

Shut down the IDE, so that we can go to the next step. This first step, Portable Mode, is not directly related to the use of components in different versions; in fact, IT IS A COMPLETELY EXTRA step here, not a necessary one. The thing is, we need portable mode, otherwise we will end up creating conflicts (many of them) trying to use the Arduino IDE with different versions of software... with Portable Mode, we can have as MANY Arduino IDEs as we want, all independent from each other; that is the reason we include the instruction here!

STEP 2: Download the "arduino-esp32" framework/library and place it inside your custom Arduino IDE folder.

Now let's look again inside the main folder; there is a folder called "hardware"...

arduino-1.8.13
├── hardware/
...

("/" is a folder)

We will install an older version of the "arduino-esp32" software inside this "hardware" folder...
Before downloading and installing the component, we need to create a new folder inside the "hardware" folder, with the name "espressif".

#enter the folder "hardware"
cd hardware
#create the "espressif" folder
mkdir espressif

We have something like this:

arduino-1.8.13/
└── hardware/
    └── espressif/

We can use either METHOD 1 - git commands, or METHOD 2 - manual zip download.

METHOD 1

The git command is needed; if you don't have it installed, you need to install it:

#if you need to install git
sudo apt install git
#download the current version of the software (latest)
git clone esp32
#enter the "esp32" folder
cd esp32
#revert, go back to the version (commit) at 2021/FEB/22
git checkout 560c0f45f58b907f0d699f65408b87fe54650854
#you can find a full list of versions (commits) here:

Or, METHOD 2

Using the web browser, go to: [commits URL] Choose a version (commit) to download. For this little tutorial, we chose the 2021/FEB/22 version (commit): [2021/FEB/22 commit URL] Download it as a zip file and extract it. Then, create a new folder "esp32" inside the folder "espressif". We need it as shown below:

arduino-1.8.13/
└── hardware/
    └── espressif/
        └── esp32/

All the files and folders that were extracted from the zip file need to be placed inside the folder "esp32". The following is what I have inside "esp32":

arduino-1.8.13/hardware/espressif/esp32/
├── boards.txt
├── CMakeLists.txt
├── component.mk
├── cores/
├── docs/
├── Kconfig.projbuild
├── libraries/
├── LICENSE.md
├── Makefile.projbuild
├── package/
├── package.json*
├── platform.txt
├── programmers.txt
├── README.md
├── tools/
└── variants/

Both METHOD 1 and METHOD 2 should result in a folder called "tools" inside the folder "esp32", which is inside "espressif", which is inside "hardware".
Like the following:

arduino-1.8.13/hardware/espressif/esp32/tools/

We need to be inside this "tools" folder to run the following command:

#if you are inside the "espressif" folder run:
cd esp32/tools
#or, if you are inside the "esp32" folder, run:
cd tools
#then (when inside the 'tools' folder), run:
python3 get.py

The previous command (python3 get.py) downloads and extracts the tools (compiler, etc.) needed to build our sketches into binary form. We can now go back to the main "arduino-1.8.13/" folder and run:

./arduino

If you are already running the IDE, you need to restart it!

Step 3: What Kind of Target Board to Upload to?

The M5Stack UNITCAM uses an ESP32-WROOM chip, so we can use the "ESP32 Dev Module", with 2 observations for using it with the UNITCAM:

[pict target-board]

Or, if you prefer, add a new definition, something like the following, to the file "boards.txt"... then M5 UnitCam will appear in the target board menu... In our Portable Mode IDE example, this file is located in:

arduino-1.8.13/hardware/espressif/esp32

M5Stack-ESP32-UnitCam.name=M5StackUnitCam
M5Stack-ESP32-UnitCam.upload.tool=esptool_py
M5Stack-ESP32-UnitCam.upload.maximum_size=3145728
M5Stack-ESP32-UnitCam.upload.maximum_data_size=327680
M5Stack-ESP32-UnitCam.upload.flags=
M5Stack-ESP32-UnitCam.upload.extra_flags=
M5Stack-ESP32-UnitCam.upload.speed=115200
M5Stack-ESP32-UnitCam.serial.disableDTR=true
M5Stack-ESP32-UnitCam.serial.disableRTS=true
M5Stack-ESP32-UnitCam.build.tarch=xtensa
M5Stack-ESP32-UnitCam.build.bootloader_addr=0x1000
M5Stack-ESP32-UnitCam.build.target=esp32
M5Stack-ESP32-UnitCam.build.mcu=esp32
M5Stack-ESP32-UnitCam.build.core=esp32
M5Stack-ESP32-UnitCam.build.variant=esp32
M5Stack-ESP32-UnitCam.build.board=ESP32_DEV
M5Stack-ESP32-UnitCam.build.flash_size=4MB
M5Stack-ESP32-UnitCam.build.partitions=huge_app
M5Stack-ESP32-UnitCam.build.defines=
M5Stack-ESP32-UnitCam.build.extra_libs=
M5Stack-ESP32-UnitCam.build.code_debug=0
M5Stack-ESP32-UnitCam.menu.PSRAM.disabled=Disabled
M5Stack-ESP32-UnitCam.menu.PSRAM.disabled.build.defines=
M5Stack-ESP32-UnitCam.menu.PSRAM.disabled.build.extra_libs=
M5Stack-ESP32-UnitCam.menu.PSRAM.enabled=Enabled
M5Stack-ESP32-UnitCam.menu.PSRAM.enabled.build.defines=-DBOARD_HAS_PSRAM -mfix-esp32-psram-cache-issue -mfix-esp32-psram-cache-strategy=memw
M5Stack-ESP32-UnitCam.menu.PSRAM.enabled.build.extra_libs=
M5Stack-ESP32-UnitCam.menu.CPUFreq.240=240MHz (WiFi/BT)
M5Stack-ESP32-UnitCam.menu.CPUFreq.240.build.f_cpu=240000000L
M5Stack-ESP32-UnitCam.menu.CPUFreq.160=160MHz (WiFi/BT)
M5Stack-ESP32-UnitCam.menu.CPUFreq.160.build.f_cpu=160000000L
M5Stack-ESP32-UnitCam.menu.CPUFreq.80=80MHz (WiFi/BT)
M5Stack-ESP32-UnitCam.menu.CPUFreq.80.build.f_cpu=80000000L
M5Stack-ESP32-UnitCam.menu.CPUFreq.40=40MHz (40MHz XTAL)
M5Stack-ESP32-UnitCam.menu.CPUFreq.40.build.f_cpu=40000000L
M5Stack-ESP32-UnitCam.menu.CPUFreq.26=26MHz (26MHz XTAL)
M5Stack-ESP32-UnitCam.menu.CPUFreq.26.build.f_cpu=26000000L
M5Stack-ESP32-UnitCam.menu.CPUFreq.20=20MHz (40MHz XTAL)
M5Stack-ESP32-UnitCam.menu.CPUFreq.20.build.f_cpu=20000000L
M5Stack-ESP32-UnitCam.menu.CPUFreq.13=13MHz (26MHz XTAL)
M5Stack-ESP32-UnitCam.menu.CPUFreq.13.build.f_cpu=13000000L
M5Stack-ESP32-UnitCam.menu.CPUFreq.10=10MHz (40MHz XTAL)
M5Stack-ESP32-UnitCam.menu.CPUFreq.10.build.f_cpu=10000000L
M5Stack-ESP32-UnitCam.menu.FlashMode.qio=QIO
M5Stack-ESP32-UnitCam.menu.FlashMode.qio.build.flash_mode=dio
M5Stack-ESP32-UnitCam.menu.FlashMode.qio.build.boot=qio
M5Stack-ESP32-UnitCam.menu.FlashMode.dio=DIO
M5Stack-ESP32-UnitCam.menu.FlashMode.dio.build.flash_mode=dio
M5Stack-ESP32-UnitCam.menu.FlashMode.dio.build.boot=dio
M5Stack-ESP32-UnitCam.menu.FlashMode.qout=QOUT
M5Stack-ESP32-UnitCam.menu.FlashMode.qout.build.flash_mode=dout
M5Stack-ESP32-UnitCam.menu.FlashMode.qout.build.boot=qout
M5Stack-ESP32-UnitCam.menu.FlashMode.dout=DOUT
M5Stack-ESP32-UnitCam.menu.FlashMode.dout.build.flash_mode=dout
M5Stack-ESP32-UnitCam.menu.FlashMode.dout.build.boot=dout
M5Stack-ESP32-UnitCam.menu.FlashFreq.80=80MHz
M5Stack-ESP32-UnitCam.menu.FlashFreq.80.build.flash_freq=80m
M5Stack-ESP32-UnitCam.menu.FlashFreq.40=40MHz
M5Stack-ESP32-UnitCam.menu.FlashFreq.40.build.flash_freq=40m

If you changed the "boards.txt" file, you need to restart the IDE... DONE, this is it!

[MANY arduino portable picture]

Step 4: Running a Sample

The UNITCAM does not have PSRAM, so it will NOT work with higher resolution images, but for smaller resolutions it works OK...

A MINIMALIST PROOF OF CONCEPT DEMO (myCamSketch.ino)

The following code is a full/complete working demo Arduino Sketch. Of course, it DOES NOT work with the current/latest (2021/DEC) version of the Arduino-Esp32 Framework; you need to install an old version, as described in this text above. The little demo only captures frames and shows text messages (on the serial) about success and the size of the picture...

#include "esp_camera.h"

void setup() {
  Serial.begin(115200);

  camera_config_t config;
  // Pin Map for M5Stack UnitCam: set the pin fields of config here
  config.jpeg_quality = 20;
  config.frame_size = FRAMESIZE_CIF;
  config.fb_count = 1;

  // Initialize the Camera
  esp_err_t err = esp_camera_init(&config);
  if (err != ESP_OK) {
    Serial.printf("Camera Initialization Fail. Error 0x%x", err);
    return;
  }
}

void loop() {
  camera_fb_t * fb = NULL;
  while (true) {
    // Wait 5 seconds before "taking" a picture
    delay(5000);
    // "Take" a picture
    fb = esp_camera_fb_get();
    if (!fb) {
      Serial.println("Camera ERROR, unable to capture...");
      return;
    }
    Serial.println("Picture Capture Success!");
    Serial.printf("Size is %d \n", fb->len);
    esp_camera_fb_return(fb);
  }
}

Here is the output:

[picture serial output]

It is nice to see that, for JPEG frames, it is also possible to get XGA, 1024x768 resolution... My M5 account here is new, I do not have permission to upload zip files, so I will upload a more complete demo/sample to GitHub... one that allows us to see the image that was captured... I will post the URL here later...
[MY Motivation]

What motivated me to write this post and the other one about ESP-IDF (using many different versions of code) goes far beyond the camera, WiFi and ESP32 devices... the basic ideas touched on here, relating to using different versions of source code, APPLY TO EVERYTHING IN SOFTWARE, and a little practical understanding of the subject is knowledge worth spending time to get... If we use "language from statistics": of all the RICH SET of tutorials, sample code, and software projects that work, we end up benefiting from ONLY about 20%, and the remaining 80% are, for some reason, not able to work with the set of tools, hardware and setups that we individually have at hand... Every individual or team, when building a software project or a tutorial/sample, is using some specific versions of software libraries, specific versions of tools and specific versions of operating systems, etc... when we try to use such code/project/sample with a different set of libs, tools and OSes, things don't always work as expected... In fact, I think we can use the 80/20 Principle here, as a kind of analogy/approximation, and say that we are all using ONLY 20% of the code (projects, samples, tutorials) out there, when the true potential is somewhere around 80% (100% will be difficult to achieve)... My point here is that some of the reasons (factors) for the "20%" are known to us, and there are things we can do about them... so we can walk towards the "80%"... and at least some of these reasons are relatively easy to deal with...

[picts]

Ok, combined, the 2 posts became sizable, but it is really what I was motivated to write about... Also, I just want to say that this little M5Stack UnitCam fulfills my expectations: it is very simple, cheap, and delivers what I expected from it... nice. About a month ago I got my first 2 M5 products: the UnitCam and the ESP32-C3 Mate, and decided to start with the UnitCam...
also, it was my first attempt to work with camera devices on ESP32... and, because of the little bug, I was kind of "lost" in the first few attempts (with the Arduino IDE)... Since the M5 Company is more focused on the UIFlow Programming Environment, people who want to use Arduino or IDF have less doc material to start with... I found MANY tutorials/samples regarding esp32-cam devices, but FOUND NONE showing the M5-UnitCam working with the Arduino IDE or ESP-IDF, with source code available and a clear statement that it is done in such and such a way... So, I ended up doing these 2 texts/posts... including the broader category the issue falls into, relating to the complexity of using different versions of software that are made of different versions of software components... Lots of text, maybe there is some typo or bad statement somewhere; if you find some, tell me so that I can correct them... The M5-UnitCam is a cool device; using UIFlow, the Arduino IDE and ESP-IDF should unleash its great value... Hope this helps other UnitCam users. Regards all, Valter (2021/DEC/04, Japan)

Arduino IDE v1.8.13 (ARM 32Bits), using old Arduino-esp32 commit (version of 2021/FEB/22). I was unable to upload the sample in zip format; there is a message like "you do not have privileges for this action"... I will upload this sample as well as another one for the Arduino IDE to GitHub and will place the link here... Valter

For the M5Stack UNITCAM and the ESP-IDF Programming Framework, I posted info here: Next, I will show how to do the same for the Arduino IDE Framework... Valter

Hi, this post is a follow-up to a previous post on the subject of using the UnitCam with ESP-IDF.

Hi, I just want to address users of the M5Stack UNITCAM who are trying, or want to try, the Arduino IDE Framework or the ESP-IDF Framework. There is a bug introduced somewhere this year (2021) in the "ESP32-camera" software, which causes all ESP32 CAMERA DEVICES WITHOUT PSRAM to fail when executing the camera initialization code...
The problem is created during the compilation procedure, where PSRAM is ALWAYS selected, independent of the setup that the user does, and regardless of whether the device has PSRAM or not... So, the binary code ALWAYS tries to use PSRAM, causing a CRASH during execution... which may produce an error message like this:

[error message]

E (127) cam_hal: cam_dma_config(280): frame buffer malloc failed
E (127) cam_hal: cam_config(364): cam_dma_config failed
E (127) camera: Camera config failed with error 0xffffffff
Camera init failed with error 0xffffffff

This problem is reported here on the ESP32-Camera GitHub development website: [issue on GitHub]

In the beginning, after trying several different samples, all of them giving the same error message, I started to think that my unit had a hardware problem with the camera... but then I saw the GitHub issue report, which makes perfect sense for my case... I then downloaded older versions of the ESP32-Camera software and tested with success on the first attempt. First I did it with the ESP-IDF Programming Framework: success! Then I searched for how to use older software/libraries with the Arduino IDE, found the information on the Espressif website and tried an old version (about the same time/day as the IDF code I had selected), and it WORKED OK!

[How to install esp32 on Arduino - Espressif site]

So, a temporary solution for the problem is: DOWNLOAD AN OLD VERSION (COMMIT) OF THE ESP32-CAM, it was working OK in the past... I did not check WHEN, exactly, the bug was introduced, which I think is somewhere between FEBRUARY and AUGUST of 2021... For me, the 2021/FEB/22 versions work OK... This problem seems very simple and easy to correct, so we may expect that in a few days a solution will be announced on the ESP32-Camera GitHub development website... I am writing 2 little texts explaining how to use old versions of the ESP32-Camera. The first is about the ESP-IDF Framework, the second about the Arduino IDE, and I will post/publish them here...
Knowing how to use different versions of the code is VERY helpful in many circumstances, and can give the user another level of understanding when using code samples or when trying to create/modify code, so I think it is worth knowing about. There are thousands of good projects/samples/tutorials on the Internet, and many of them will work fine and well ONLY when we use old versions of some software components, so learning how to use old (or different) versions can be interesting... In this present post I just want to inform UNITCAM users that, for ESP-IDF and the Arduino IDE, using the camera may return errors like the one at the top of this post, and what the reason is... so there is an easy solution, which is: USE OLD VERSIONS OF THE CODE! In the next posts I will explain the details of using old versions (also called commits) of the code, for IDF and for the Arduino IDE. Regards all, Val!

About old versions of ESP32-Arduino on the Arduino IDE... Yes, it is possible and EASY to install old versions... This official guide describes how to install the current version: The only change needed is to download a different/old version of the code... I did a test going back to 2021/FEB/23, and it WORKS! (This is the same day I had chosen to test the ESP32-camera software for ESP-IDF, so I was hoping that it would be OK for the Arduino IDE setup as well...)

hi, I also experienced the same kind of error trying an M5Stack UnitCam (no PSRAM), using the Arduino IDE Platform. Here is the error msg:

E (127) cam_hal: cam_dma_config(280): frame buffer malloc failed
E (127) cam_hal: cam_config(364): cam_dma_config failed
E (127) camera: Camera config failed with error 0xffffffff
Camera init failed with error 0xffffffff

I found that there is a bug in the "esp32-camera" software, which is preventing the code from working correctly on devices WITHOUT PSRAM!
See the issue here: In the case of the UnitCam, I was able to confirm that going back to the 2021/FEB code (esp32-camera), it works perfectly, without problems, on the ESP-IDF Dev Framework... So, now I have it working OK. I believe that this is the reason why the Arduino IDE Framework also fails for the UnitCam (and also for any other CAM without PSRAM), since it is probably using the latest version of the code, which has the bug in it... I don't know how to "install" the old version of esp32-camera on the Arduino IDE; besides, I am asking myself if this is possible/easy to do, because I do have an interest in using it with the Arduino IDE as well... I will post more info regarding the UnitCam and this bug later... The bug itself does not seem difficult to resolve, so hopefully the solution will come in the next few days... So, the example inside the Arduino IDE and all the other tutorials/samples that exist on the Internet probably WORK OK on devices without PSRAM... what is causing the trouble is the bug introduced this year (2021) in the code, code that was OK in past months/years... Hope it can help, Valter
https://forum.m5stack.com/user/valter-fukuoka
This is a port of Easy debug text.

Table of contents

About

The following code is a class, for use with the OgreDebugPanel {LEX()}overlay{LEX} that is added into all the demos and sample applications. The reason for creating it was to be able to change the debug text at different points in the code. If you do something like this...

// this must be changed for C# code:
mWindow->setDebugText(mWindow->getDebugText() + "new text");

...you end up with a never-ending line of text that would keep growing and slowly kill your application. Maybe the code has teething problems. If you need help or have added improvements, please tell us in this thread. You will notice that in the actual printText() method I also couldn't find the setDebugText() method on the RenderWindow class, so I have put a line in there to write the debug text to the Visual Studio output window. You may have your own way of displaying debug information in your own project, so just change that line. Also, the sceneManager and window objects aren't actually used, so they can be completely removed from this class, but I left them in in case you want to expand this class.

Source code

using System;
using System.Text;
using System.Collections.Generic;
using Mogre;

namespace your_projects_namespace
{
    public class DebugWriter
    {
        SceneManager sceneManager;
        RenderWindow window;
        List<string> debugLines;

        public DebugWriter(SceneManager _sceneManager, RenderWindow _window)
        {
            sceneManager = _sceneManager;
            window = _window;
            //create empty list of strings
            debugLines = new List<string>();
        }

        public void addDebugText(string text)
        {
            //simply add string to our current list
            debugLines.Add(text);
        }

        public void printText()
        {
            string output = "";
            //loop through each string in the list and join them together
            foreach (string line in debugLines)
                output += line + ", ";
            //output debug text
            // I didn't find a member like this... ??
            //window.setDebugText(output);
            System.Diagnostics.Debug.WriteLine(output);
            debugLines.Clear();
        }
    }
}

See also

- MOGRE MovableText by Billboards - shows text, clamped to a SceneNode
- MOGRE MovableText - shows text, clamped to a {LEX()}SceneNode{LEX}
- MovableTextOverlay - shows text, clamped to a SceneNode
- Simple text in MOGRE - shows text independent of a SceneNode
- SpriteManager2d | MOGRE SpriteManager2d - code snippet
- OgreSprites - similar, but without use of Billboard
- ManualObject 2D
- {LEX()}Overlay{LEX}
- {LEX()}Billboard{LEX}
- {LEX()}GUI{LEX} - several gui systems

Alias: Easy debug text MOGRE
https://wiki.ogre3d.org/Easy+Debug+Text+for+MOGRE
Incidents/2019-01-10 WDQS

Summary

The WDQS update process started failing on wdqs1007 and shortly after on wdqs1008. No further updates were possible, though read queries proceeded normally. Further details on Phabricator:

Timeline

- Jan 7 22:57:10 wdqs1007 started producing errors on writes
- Jan 8 around 03:00:00 wdqs1008 started producing errors on writes
- Jan 8 the cause identified as the Blazegraph allocator limit (see below)
- Jan 9 12:02:42 the wdqs1007 and wdqs1008 databases were reloaded from wdqs1004 and full functionality restored

Conclusions

Blazegraph has a limit of 256K allocators (see below), which was hit on these databases. It is unclear why these databases hit it while others did not, but due to the way that Blazegraph allocates things (e.g. literal/term dictionary entries are never removed even if triples containing them are deleted) it may depend on the workload and previous history. If the database is at the limit for allocators, it will not be able to create new memory allocations, and thus writes will fail. We need to readjust how we use the database to preclude exhausting the number of allocators.

Links to relevant documentation

- Discussion on Blazegraph issue tracker:
- Phabricator umbrella task:

Actionables

We need to take some measures to ensure we're not hitting allocator limits now or in the future:

- Done: The categories namespace has non-trivial size; splitting it into a separate database (Blazegraph instance) would probably free up some allocations and thus alleviate the allocator problem for a while.
- We are using so-called "raw record" mode, which means memory for literals is allocated directly. This consumes a lot of allocators. Turning it off would allocate literals together with index pages, reducing allocator numbers.
- We may consider inlining more URIs - such as values and references - which switches them from allocated space to in-index space. While it may not be a significant savings in terms of storage space, it would make them not consume allocators either.
- Done: We may want to add some monitoring of allocator counts (available under status?dumpJournal) to ensure that when we're nearing a dangerous place we at least get warned about it.
- Done: wdqs1006 also has a low number of available allocators; we may want to reload its database as well.
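The "monitor allocator counts" actionable could look something like the sketch below. The line format parsed here is a stand-in assumption — the real status?dumpJournal output would need to be inspected — and the 90% warning threshold is likewise an assumed value:

```python
import re

ALLOCATOR_LIMIT = 256 * 1024   # the 256K allocator limit from this incident
WARN_RATIO = 0.9               # assumed warning threshold (90% of the limit)

def allocator_warning(dump_journal_text):
    """Return True when the parsed allocator count nears the limit.
    The 'allocators: N' pattern is a hypothetical stand-in for whatever
    the real dumpJournal output actually contains."""
    m = re.search(r"allocators:\s*(\d+)", dump_journal_text)
    if m is None:
        return False
    return int(m.group(1)) >= WARN_RATIO * ALLOCATOR_LIMIT

# A dump reporting 245000 allocators (~93% of 262144) would trigger a warning.
```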
https://wikitech-static.wikimedia.org/w/index.php?title=Incidents/2019-01-10_WDQS&oldid=588392
#include <rte_eth_ctrl.h>

A structure used to get the information of a flow director filter. It supports RTE_ETH_FILTER_FDIR with the RTE_ETH_FILTER_INFO operation. It includes the mode, flexible payload configuration information, capabilities and supported flow types, and flexible payload characteristics. It can be retrieved to help apply specific configurations per device.

Field descriptions:

- Best effort spaces.
- Flex bitmask unit in bytes. Size of flex bitmasks should be a multiple of this value.
- Flex payload configuration information.
- Maximum src_offset in bytes allowed. It indicates that src_offset[i] in struct rte_eth_flex_payload_cfg should be less than this value.
- Flexible payload unit in bytes. Size and alignments of all flex payload segments should be multiples of this value.
- Bit mask for every supported flow type.
- Guaranteed spaces.
- Max supported size of flex bitmasks in flex_bitmask_unit.
- Max number of flexible payload continuous segments. Each segment should be a multiple of flex_payload_unit.
- Total flex payload in bytes.
- Flow director mode.
https://doc.dpdk.org/api-2.0/structrte__eth__fdir__info.html
vmod_saintmode - Man Page Saint mode backend director Synopsis import saintmode [as name] [from "path"] VOID blacklist(DURATION expires) STRING status() new xsaintmode = saintmode.saintmode(BACKEND backend, INT threshold) BACKEND xsaintmode.backend() INT xsaintmode.blacklist_count() BOOL xsaintmode.is_healthy() Description This VMOD provides saintmode functionality for Varnish Cache 4.1 and newer. The code is in part based on Poul-Henning Kamp's saintmode implementation in Varnish 3.0. Saintmode. Saintmode in Varnish 4.1 is implemented as a director VMOD. We instantiate a saintmode object and give it a backend as an argument. The resulting object can then be used in place of the backend, with the effect that it also has added saintmode capabilities. Any director will then be able to use the saintmode backends, and as backends marked sick are skipped by the director, this provides a way to have fine grained health status on the backends, and making sure that retries get a different backend than the one which failed. Example: vcl 4.0; import saintmode; import directors; backend tile1 { .host = "192.0.2.11"; .port = "80"; } backend tile2 { .host = "192.0.2.12"; .port = "80"; } sub vcl_init { # Instantiate sm1, sm2 for backends tile1, tile2 # with 10 blacklisted objects as the threshold for marking the # whole backend sick. new sm1 = saintmode.saintmode(tile1, 10); new sm2 = saintmode.saintmode(tile2, 10); # Add both to a director. Use sm0, sm1 in place of tile1, tile2. # Other director types can be used in place of random. new imagedirector = directors.random(); imagedirector.add_backend(sm1.backend(), 1); imagedirector.add_backend(sm2.backend(), 1); } sub vcl_backend_fetch { # Get a backend from the director. # When returning a backend, the director will only return backends # saintmode says are healthy. 
      set bereq.backend = imagedirector.backend();
  }

  sub vcl_backend_response {
      if (beresp.status >= 500) {
          # This marks the backend as sick for this specific
          # object for the next 20s.
          saintmode.blacklist(20s);
          # Retry the request. This will result in a different backend
          # being used.
          return (retry);
      }
  }

VOID blacklist(DURATION expires)

Marks the backend as sick for a specific object. Used in vcl_backend_response. Corresponds to the use of beresp.saintmode in Varnish 3.0. Only available in vcl_backend_response.

Example:

  sub vcl_backend_response {
      if (beresp.http.broken-app) {
          saintmode.blacklist(20s);
          return (retry);
      }
  }

STRING status()

Returns a JSON formatted status string suitable for use in vcl_synth.

Example:

  sub vcl_recv {
      if (req.url ~ "/saintmode-status") {
          return (synth(700, "OK"));
      }
  }

  sub vcl_synth {
      if (resp.status == 700) {
          synthetic(saintmode.status());
          return (deliver);
      }
  }

Example JSON output:

  {
    "saintmode": [
      { "name": "sm1", "backend": "foo", "count": "3", "threshold": "10" },
      { "name": "sm2", "backend": "bar", "count": "2", "threshold": "5" }
    ]
  }

new xsaintmode = saintmode.saintmode(BACKEND backend, INT threshold)

Constructs a saintmode director object. The threshold argument sets the saintmode threshold, which is the maximum number of items that can be blacklisted before the whole backend is regarded as sick. Corresponds to the saintmode_threshold parameter of Varnish 3.0.

Example:

  sub vcl_init {
      new sm = saintmode.saintmode(b, 10);
  }

BACKEND xsaintmode.backend()

Used for assigning the backend from the saintmode object.

Example:

  sub vcl_backend_fetch {
      set bereq.backend = sm.backend();
  }

INT xsaintmode.blacklist_count()

Returns the number of objects currently blacklisted for a saintmode director object.
Example:

  sub vcl_deliver {
      set resp.http.troublecount = sm.blacklist_count();
  }

BOOL xsaintmode.is_healthy()

Checks if the object is currently blacklisted for a saintmode director object. If there are no valid objects available (called from vcl_hit or vcl_recv), the function falls back to the backend's health function.
https://www.mankier.com/3/vmod_saintmode
Rich logging messages.

CAFFE_ENFORCE_THAT can be used with one of the "checker functions" that capture input argument values and add them to the exception message. E.g.

  CAFFE_ENFORCE_THAT(Equals(foo(x), bar(y)), "Optional additional message")

would evaluate both foo and bar only once and, if the results are not equal, include them in the exception message. Some of the basic checker functions like Equals or Greater are already defined below. Other headers might define customized checkers by adding functions to the caffe2::enforce_detail namespace. For example:

  namespace caffe2 {
  namespace enforce_detail {

  inline EnforceFailMessage IsVector(const vector<int64_t>& shape) {
    if (shape.size() == 1) {
      return EnforceOK();
    }
    return c10::str("Shape ", shape, " is not a vector");
  }

  }
  }

With further usages like

  CAFFE_ENFORCE_THAT(IsVector(Input(0).dims()))

Convenient wrappers for binary operations like CAFFE_ENFORCE_EQ are provided too. Please use them instead of CHECK_EQ and friends for failures in user-provided input.
https://caffe2.ai/doxygen-c/html/namespacec10_1_1enforce__detail.html
From Documentation

The main focus of ZK 3.6.1 was bug-fixing, with over 47 bugs fixed. In addition to the host of bugs eradicated, 20 new features have been added, including a debug mode for unit testing and MVC enhancements! In the following paragraphs, I'll introduce the most exciting new additions to ZK 3.6.1.

Ease of use

An intuitive way to access the Composer directly

Using ZK 3.6.1, the developer has easier access to the Composer by using a generic variable, $composer, instead of the Java class name (e.g. $MyComposer). For example,

  <window id="win" apply="MyComposer">
      <grid model="${win$composer.model}"/>
  </window>

instead of,

  <grid model="${win$MyComposer.model}"/>

Developers can define required resources inside the Composer and access them more easily.

An easy way to scroll to a specific UI component

ZK 3.6.1 provides an easier way to scroll to a specific UI component within a container, using the Clients.scrollIntoView() function. In the following example, the vbox will scroll to the T1 label when the user clicks the button.

  <div height="100px" width="50px" style="overflow:auto">
      <vbox>
          <label value="A"/>
          <label value="B"/>
          <label value="C"/>
          <label value="D"/>
          <label value="E"/>
          <label value="F"/>
          <label value="G"/>
          <label value="H"/>
          <label id="T1" value="I"/>
      </vbox>
  </div>
  <button label="scrollToT1" onClick="Clients.scrollIntoView(T1)"/>

Component reloaded

Datebox format enhancement

Since ZK 3.6.1, the set of formats supported by datebox has been enhanced: users can specify the hour and minute. Moreover, more Java formats are supported, including yyyy/MM/dd-HH:mm, yyyy/MM/dd-kk:mm, yyyy/MM/dd-K:mm a, and so on.

  <datebox format="yyyy/MM/dd HH:mm a" cols="25" />

A way to specify the position of the Popup component

ZK 3.6.1 provides the method popup.open(componentid, relative position) to specify the position of a popup component. The function's second argument takes a relative position; a list of 11 possible positions is provided below.
The following illustrates the simplicity of usage:

  <popup id="pp">
      Here is popup
  </popup>
  <button label="before_start" onClick='pp.open(self, "before_start");' />

Upon clicking the button, the popup component will appear in the relative position specified. In this case the position is just above the button.

Native namespace support for zkhead

ZK 3.6.1 now provides support for zkhead, which allows the user to specify the position of ZK's default JS and CSS files. This is particularly useful if you want the JS and CSS files loaded in a specific order. For example,

  <html xmlns="" xmlns:
  <head>
      <title>Native Complete Page</title>
      <meta http-
      <zkhead/>
  </head>

Databinding enhancements

Using the "load-after" annotation to delay loading behavior

The Databinding manager will load data from the Model and then supply the UI with the relevant information. This happens before the event listener processes declared with the load-when annotation are executed. It would be more practical to run the processes defined within the event listener first, as the Model is updated by those listeners. It is now possible to implement this by using the load-after annotation.

  <vbox>
      <label id="lb1" value="@{tar.value, load-after=btn.onClick}"/>
      <label id="tar" value="john"/>
      <button id="btn" label="try" onClick='tar.setValue("mary")'/>
  </vbox>

Once the user clicks the button, the order of processing is as follows:

- The process defined within the event listener is executed. (tar.setValue("mary"))
- The loading behavior of Databinding is executed. (lb1.value = tar.value)

Using the "save-after" annotation to delay saving behavior

The Databinding manager will access the UI's data and pass it to the Model for storage or processing. This happens before the event listener processes declared with the save-when annotation are executed. In most cases this satisfies developers' requirements, as we usually want to retrieve users' input data before processing it.
However, if developers would like to delay the saving of data until after the invocation of the event listener, they can use the save-after annotation instead of save-when.

Advanced features

Use the Component ID as a UUID for unit tests

For client-side debugging, developers must generate a predictable component id, allowing client-side unit-testing libraries access to the UI components. This is made easier in ZK 3.6.1, as ZK can now generate UUIDs (Universally Unique Identifiers) from the component id rather than using a randomly generated UUID. Simply define the following configuration in zk.xml:

  <desktop-config>
      <id-to-uuid-prefix>_zid_</id-to-uuid-prefix>
  </desktop-config>

Then the following component's UUID will be _zid_foo:

  <textbox id="foo"/>

For more information, please refer to How to Test ZK Application with Selenium.

A method of monitoring the generation of children components using the FullComposer

In previous versions, methods defined in Composer and ComposerExt were only called when the associated component was generated. It would be more convenient if these methods were also called when its children were generated. In ZK 3.6.1 this is now possible by implementing FullComposer. If implemented alongside Composer and ComposerExt, the Composer methods will be called not only for the top-level component, but for its children as well. For example, in the following listing, methods defined in Composer1 will be called while generating not only the div component, but the lb1 and lb2 components as well.

  <window border="normal">
      <zscript><![CDATA[
      <label id="lb1" value="b_2_1"/>
      <label id="lb2" value="b_2_2"/>
      </div>
  </window>
https://www.zkoss.org/wiki/Small%20Talks/2009/April/New%20Features%20of%20ZK%203.6.1
CC-MAIN-2018-47
en
refinedweb
Time for a new snapshot. With the (more or less) completion of java.nio.file, the release is getting closer. There are still some minor issues, but the bulk of the work is now complete.

Changes:

Binaries available here: ikvmbin-7.0.4296.zip

Time for a new snapshot.

Binaries available here: ikvmbin-7.0.4266.zip

The release notes for IKVM.NET have always said "Not implemented" for java.lang.management and javax.management. This was mostly due to the fact that I don't know very much about this area of Java, and it doesn't make a lot of sense to use Java management tools when the equivalent .NET management tools are probably a better fit (at least for VM-level operations). This week, prompted by a question on the ikvm-developers list, I decided to look into improving the situation (a bit). As a result it is now possible to get the platform MBean server and to connect to it with the jconsole application.

To start the server, run the following code:

  java.lang.System.setProperty("com.sun.management.jmxremote", "true");
  java.lang.System.setProperty("com.sun.management.jmxremote.authenticate", "false");
  java.lang.System.setProperty("com.sun.management.jmxremote.ssl", "false");
  java.lang.System.setProperty("com.sun.management.jmxremote.port", "9999");
  sun.management.Agent.startAgent();

Now when you start jconsole in the following way, you can connect to localhost:9999:

  jconsole -J-Dcom.sun.management.jmxremote.ssl=false

Note that the mechanism that jconsole uses to detect and connect to locally running JDK instances is very platform specific and is not supported. Note also that IKVM does not support "agents", so you have to start the management agent explicitly by calling it directly.

Limitations

The information (and operations) exposed is pretty limited. I still maintain that using .NET-specific management tools is a better solution, but if you have any specific scenario you want to see supported, please let me know and I'll consider it.
Code

If you want to play with it, the binaries are available here: ikvmbin-7.0.4261.zip

Binaries available here: ikvmbin-7.0.4258.zip

There's a working JSR 292 implementation now. No optimization work has been done yet; the first step is to get things working.

Binaries available here: ikvmbin-7.0.4245.zip

  using java.lang.invoke;

  class Program {
      static void Main() {
          MethodType mt = MethodType.methodType(typeof(void), typeof(string), typeof(object[]));
          MethodHandle mh = MethodHandles.lookup().findStatic(typeof(System.Console), "WriteLine", mt);
          mh.invoke("{0} {1}", "Hello", "World");
      }
  }

This now works, but it is not very efficient. Invoking a MethodHandle from Java is more efficient, because the call site signature is statically known in that case. You can also call invokeExact from C#, but that's even less useful, because (unlike from Java) you can only call MethodHandles with the same signature as invokeExact. However, it is very fast, because it doesn't do any conversions. If there is demand for it, I'll consider adding a public API for getting the delegate from a MethodHandle.

I've been working on JSR 292, and in particular MethodHandle support, for the past week. It's been fun, and I only found a single CLR bug so far, so I guess that's not too bad.

In the implementation of MethodHandle I use lots of delegates and DynamicMethods. When you generate invalid CIL for a DynamicMethod, fun stuff happens, e.g. helpful exceptions, unhelpful exceptions, crashes or this interesting message:

  ====WARNING====
  You have probably encountered known bug VSW:137474, which fires
  when System.EnterpriseServices.Thunk.Proxy::LazyRegister is jitted.
  The bug often shows up in tests under ManagedServices\Interop.
  VSW:137474 has been fixed, but the fix has not yet been propagated
  to Lab21S. Please check to see if the assert/AV occurs while
  compiling LazyRegister before entering a new bug for this failure.
  ===============

The JIT just prints this to the console and continues on!
The OpenJDK java.lang.invoke package tests now pass on my systems with only 3 failures, and they are all well understood. The first is due to invokedynamic not being implemented yet, and the other two are due to the fact that I have not yet implemented full variable-arity delegates. Currently there are about 44 delegates for the arities from 0 to 21 (unfortunately you can't use System.Void as a generic type parameter, so you need special ones for void signatures). Eventually I'll have fewer delegate types and use a tuple-like value type to pack arguments together. The JVM only supports 256 arguments, so 8 x 8 x 8 should be enough.

The code is still very rough, so it'll probably be at least another week before anything is ready to check in or release a development snapshot.
http://weblog.ikvm.net/default.aspx?date=2011-10-07
trying to accomplish:, I recognize that this does not really describe *why* acpi_os_prepare_sleep is necessary to begin with. For that, we need to go back a little more. The summary for the series that introduced it is a good description of the reasons it is necessary. In summary though - in the case of Xen (and I believe this is also true in tboot) a value inappropriate for a VM (which dom0 is a special case of) was being written to cr3, and the physical machine would never come out of S3.

This mechanism gives an OS-specific hook to do something else down at the lower levels, while still being able to take advantage of the large amount of OS-independent code in ACPICA.

I hope that this helps to clear up matters. If not, I'm happy to go into greater detail on any point, or get others involved if I cannot field the question appropriately.

Thanks for your time,
Ben

> Thanks,
> Bob
>
>> -----Original Message-----
>> From: Ben Guthro [mailto:Benjamin.Guthro@citrix.com]
>> Sent: Wednesday, July 24, 2013 6:23 AM
>> To: Moore, Robert
>> Cc: Zheng, Lv; Konrad Rzeszutek Wil

On 07/24/2013 09:18 AM, Moore, Robert wrote:
>>> I have not looked closely at this, but we typically do things like this
>>> in ACPICA so that they only need to be implemented once to support all of
>>> the various acpica-hosted operating systems - linux, solaris, hp-ux,
>>> apple, freebsd, etc.
-- even if they could be implemented "cleaner" in
>>> some way on any given host.
>>
>> Even when the resulting "simplification" results in reduced functionality?
>>
>> Maybe I am misunderstanding the suggestion... but it sounded like it was
>> basically to mimic the traditional behavior, and mask out the reduced
>> hardware capabilities on these system types.
>>
>> It seems to me that if the system supports the reduced hardware ACPI
>> sleep, you would want to make use of it...
>>
>>>> -----Original Message-----
>>>> From: Ben Guthro [mailto:Benjamin.Guthro@citrix.com]
>>>> Sent: Wednesday, July 24, 2013 5:01 AM
>>>> To: Zheng, Lv
>>>> Cc: Konrad Rzeszutek Wilk; Jan Beulich; Rafael J . Wysocki;
>>>> linux-kernel@vger.kernel.org; linux-acpi@vger.kernel.org;
>>>> xen-devel@lists.xen.org; Moore, Robert
>>>> Subject: Re: [PATCH v3 1/3] acpi: Call acpi_os_prepare_sleep hook in
>>>> reduced hardware sleep path
>>>>
>>>> On 07/24/2013 02:24 AM, Zheng, Lv wrote:
>>>>> Hi,
>>>>>
>>>>> Sorry for the delayed response.
>>>>>
>>>>>> From: Ben Guthro [mailto:Benjamin.Guthro@citrix.com]
>>>>>> Sent: Tuesday, July 02, 2013 7:43 PM
>>>>>>
>>>>>> On 07/02/2013 02:19 AM, Zheng, Lv wrote:
>>>>>>> Thanks for your efforts!
>>>>>>>
>>>>>>> I wonder if it is possible to remove the argument - "u8 extended"
>>>>>>> and convert "pm1a_control, pm1b_control" into some u8 values that
>>>>>>> are equivalent to "acpi_gbl_sleep_type_a, acpi_gbl_sleep_type_b"
>>>>>>> in the legacy sleep path.
>>>>>>> It can also simplify Xen codes.
>>>>>>
>>>>>> Thanks for your time to review this.
>>>>>>
>>>>>> I'm not sure that this simplifies things.
I think that, in fact, it
>>>>>> would make them quite a bit more complicated, but perhaps I
>>>>>> misunderstand.
>>>>>>
>>>>>> Is it not preferred to use the reduced hardware sleep, over the old
>>>>>> method?
>>>>>> While these register definitions may be equivalent below, doing the
>>>>>> translation in linux, only to translate them back again at a lower
>>>>>> layer seems unnecessary.
>>>>>
>>>>> Yes, it would require tboot layer to be able to be aware of how such
>>>>> fields locate in the PM registers.
>>>>> So I think you can pass the register address of the field and the
>>>>> field name/value pair to the tboot, this could simplify things, no
>>>>> lower layer effort will be needed.
>>>>> Please don't worry about the case that a register field could be
>>>>> split into PM1a and PM1b, it could be a hardware design issue.
>>>>> IMO, one field should always be in one register, either PM1a or PM1b.
>>>>> Or there could be hardware issues cannot be addressed by the ACPICA
>>>>> architecture (something like natural atomicity).
>>>>> But maybe I'm wrong.
>>>>
>>>> Again, I don't think this simplifies things, but complicates them
>>>> unnecessarily.
Converting the reduced hardware sleep to the legacy
>>>> sleep seems like it would be an unnecessary layer of translation.
>>>>
>>>> The interface now simply passes the information from ACPICA down to
>>>> the lower layers (xen, tboot) - and then lets them worry about the
>>>> reduced hardware implementation.
>>>>
>>>> FWIW, xen has shipped with this implementation, and enterprise kernels
>>>> using the traditional xen kernel (like Suse) are making use of it.
>>>>
>>>> It may benefit tboot, in this case, but not Xen.
>>>>
>>>> I personally see it as an undesirable complication.
>>>>
>>>> Best regards,
>>>> Ben
>>>>
>>>>> Thanks and best regards
>>>>> -Lv
>>>>>
>>>>>> The hypervisor knows how to deal with both the reduced hardware
>>>>>> sleep as well as the legacy sleep path - it merely needs to
>>>>>> distinguish these two paths, when performing the hypercall.
>>>>>>
>>>>>> Since there are two paths through the higher level ACPICA code -
>>>>>> that in hwsleep.c, and hwesleep.c - there needs to be some
>>>>>> distinction between the two paths, when calling through to the
>>>>>> lower level acpi_os_prepare_sleep() call.
>>>>>>
>>>>>> An alternate method would be to create another interface named
>>>>>> acpi_os_prepare_esleep() which would do the equivalent of this
>>>>>> patch series, with an "extended" parameter hidden from upper level
>>>>>> interfaces.
>>>>>>
>>>>>> This, however, would also add another function to
>>>>>> include/acpi/acpiosxf.h - which, I thought was undesirable, in the
>>>>>> impression that I got from Bob Moore, and Rafael Wysocki (though,
>>>>>> please correct me on this point, if I have misunderstood)
>>>>>>
>>>>>> Best Regards
>>>>>>
>>>>>> Ben
>>>>>>
>>>>>>> As in ACPI specification, the bit definitions between the legacy
>>>>>>> sleep registers and the extended sleep registers are equivalent.
>>>>>>>
>>>>>>> The legacy sleep register definition:
>>>>>>> Table 4-16 PM1 Status Registers Fixed Hardware Feature Status Bits
>>>>>>> - WAK_STS (bit 15)
>>>>>>> Table 4-18 PM1
Control Registers Fixed Hardware
>>>>>>> Feature Control Bits - SLP_TYPx (bit 10-12), SLP_EN (bit 13)
>>>>>>>
>>>>>>> The extended sleep register definition:
>>>>>>> Table 4-24 Sleep Control Register - SLP_TYPx (3 bits from offset 2),
>>>>>>> SLP_EN (1 bit from offset 5), here 10-8 = 2, and 13-8 = 5, this
>>>>>>> definition is equivalent to Table 4-18.
>>>>>>> Table 4-25 Sleep Status Register - WAK_STS (1 bit 7), 15-8 = 7,
>>>>>>> this definition is equivalent to Table 4-16.
>>>>>>>
>>>>>>> Thanks and best regards
>>>>>>> -Lv
>>>>>>>
>>>>>>>> -----Original Message-----
>>>>>>>> From: linux-acpi-owner@vger.kernel.org
>>>>>>>> [mailto:linux-acpi-owner@vger.kernel.org] On Behalf Of Ben Guthro
>>>>>>>> Sent: Wednesday, June 26, 2013 10:06 PM
>>>>>>>> To: Konrad Rzeszutek Wilk; Jan Beulich; Rafaell J . Wysocki;
>>>>>>>> linux-kernel@vger.kernel.org; linux-acpi@vger.kernel.org;
>>>>>>>> xen-devel@lists.xen.org
>>>>>>>> Cc: Ben Guthro; Moore, Robert
>>>>>>>> Subject: [PATCH v3 1/3] acpi: Call acpi_os_prepare_sleep hook in
>>>>>>>> reduced hardware sleep path
>>>>>>>>
>>>>>>>> In version 3.4 acpi_os_prepare_sleep() got introduced in parallel
>>>>>>>> with reduced hardware sleep support, and the two changes didn't get
>>>>>>>> synchronized: The new code doesn't call the hook function (if so
>>>>>>>> requested). Fix this, requiring a parameter to be added to the
>>>>>>>> hook function to distinguish "extended" from "legacy" sleep.
>>>>>>>>
>>>>>>>> Signed-off-by: Ben Guthro <benjamin.guthro@citrix.com>
>>>>>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>>>>> Cc: Bob Moore <robert.moore@intel.com>
>>>>>>>> Cc: Rafaell J.
Wysocki <rjw@sisk.pl>>>>>>>>> Cc: linux-acpi@vger.kernel.org>>>>>>>> --->>>>>>>> drivers/acpi/acpica/hwesleep.c | 8 ++++++++>>>>>>>> drivers/acpi/acpica/hwsleep.c | 2 +->>>>>>>> drivers/acpi/osl.c | 16 ++++++++-------->>>>>>>> include/linux/acpi.h | 10 +++++----->>>>>>>> 4 files changed, 22 insertions(+), 14 deletions(-)>>>>>>>>>>>>>>>> diff --git a/drivers/acpi/acpica/hwesleep.c>>>>>>>> b/drivers/acpi/acpica/hwesleep.c index 5e5f762..6834dd7 100644>>>>>>>> --- a/drivers/acpi/acpica/hwesleep.c>>>>>>>> +++ b/drivers/acpi/acpica/hwesleep.c>>>>>>>> @@ -43,6 +43,7 @@>>>>>>>> */>>>>>>>>>>>>>>>> #include <acpi/acpi.h>>>>>>>>> +#include <linux/acpi.h>>>>>>>>> #include "accommon.h">>>>>>>>>>>>>>>> #define _COMPONENT ACPI_HARDWARE>>>>>>>> @@ -128,6 +129,13 @@ acpi_status acpi_hw_extended_sleep(u8>>>>>>>> sleep_state)>>>>>>>>>>>>>>>> ACPI_FLUSH_CPU_CACHE();>>>>>>>>>>>>>>>> + status = acpi_os_prepare_sleep(sleep_state,>>>> acpi_gbl_sleep_type_a,>>>>>>>> + acpi_gbl_sleep_type_b, true);>>>>>>>> + if (ACPI_SKIP(status))>>>>>>>> + return_ACPI_STATUS(AE_OK);>>>>>>>> + if (ACPI_FAILURE(status))>>>>>>>> + return_ACPI_STATUS(status);>>>>>>>> +>>>>>>>> /*>>>>>>>> * Set the SLP_TYP and SLP_EN bits.>>>>>>>> *>>>>>>>> diff --git a/drivers/acpi/acpica/hwsleep.c>>>>>>>> b/drivers/acpi/acpica/hwsleep.c index e3828cc..a93c299 100644>>>>>>>> --- a/drivers/acpi/acpica/hwsleep.c>>>>>>>> +++ b/drivers/acpi/acpica/hwsleep.c>>>>>>>> @@ -153,7 +153,7 @@ acpi_status acpi_hw_legacy_sleep(u8>> sleep_state)>>>>>>>> ACPI_FLUSH_CPU_CACHE();>>>>>>>>>>>>>>>> status = acpi_os_prepare_sleep(sleep_state, pm1a_control,>>>>>>>> - pm1b_control);>>>>>>>> + pm1b_control, false);>>>>>>>> if (ACPI_SKIP(status))>>>>>>>> return_ACPI_STATUS(AE_OK);>>>>>>>> if (ACPI_FAILURE(status))>>>>>>>> diff --git a/drivers/acpi/osl.c b/drivers/acpi/osl.c index>>>>>>>> e721863..3fc2801 100644>>>>>>>> --- a/drivers/acpi/osl.c>>>>>>>> +++ b/drivers/acpi/osl.c>>>>>>>> @@ -77,8 +77,8 @@ EXPORT_SYMBOL(acpi_in_debugger);>>>>>>>> extern 
char line_buf[80];>>>>>>>> #endif /*ENABLE_DEBUGGER */>>>>>>>>>>>>>>>> -static int (*__acpi_os_prepare_sleep)(u8 sleep_state, u32>> pm1a_ctrl,>>>>>>>> - u32 pm1b_ctrl);>>>>>>>> +static int (*__acpi_os_prepare_sleep)(u8 sleep_state, u32 val_a,>>>>>>>> +u32>>>>>> val_b,>>>>>>>> + u8 extended);>>>>>>>>>>>>>>>> static acpi_osd_handler acpi_irq_handler;>>>>>>>> static void *acpi_irq_context;>>>>>>>> @@ -1757,13 +1757,13 @@ acpi_status acpi_os_terminate(void)>>>>>>>> return AE_OK;>>>>>>>> }>>>>>>>>>>>>>>>> -acpi_status acpi_os_prepare_sleep(u8 sleep_state, u32>> pm1a_control,>>>>>>>> - u32 pm1b_control)>>>>>>>> +acpi_status acpi_os_prepare_sleep(u8 sleep_state, u32 val_a, u32>>>> val_b,>>>>>>>> + u8 extended)>>>>>>>> {>>>>>>>> int rc = 0;>>>>>>>> if (__acpi_os_prepare_sleep)>>>>>>>> - rc = __acpi_os_prepare_sleep(sleep_state,>>>>>>>> - pm1a_control, pm1b_control);>>>>>>>> + rc = __acpi_os_prepare_sleep(sleep_state, val_a, val_b,>>>>>>>> + extended);>>>>>>>> if (rc < 0)>>>>>>>> return AE_ERROR;>>>>>>>> else if (rc > 0)>>>>>>>> @@ -1772,8 +1772,8 @@ acpi_status acpi_os_prepare_sleep(u8>>>>>>>> sleep_state,>>>>>>>> u32 pm1a_control,>>>>>>>> return AE_OK;>>>>>>>> }>>>>>>>>>>>>>>>> _os_prepare_sleep = func;>>>>>>>> }>>>>>>>> diff --git a/include/linux/acpi.h b/include/linux/acpi.h index>>>>>>>> 17b5b59..de99022 100644>>>>>>>> --- a/include/linux/acpi.h>>>>>>>> +++ b/include/linux/acpi.h>>>>>>>> @@ -477,11 +477,11 @@ static inline bool>>>>>>>> acpi_driver_match_device(struct device *dev,>>>>>>>> #endif /* !CONFIG_ACPI */>>>>>>>>>>>>>>>> #ifdef CONFIG_ACPI>>>>>>>> _status acpi_os_prepare_sleep(u8 sleep_state,>>>>>>>> - u32 pm1a_control, u32 pm1b_control);>>>>>>>> +acpi_status acpi_os_prepare_sleep(u8 sleep_state, u32 val_a, u32>>>> val_b,>>>>>>>> + u8 extended);>>>>>>>> #ifdef CONFIG_X86>>>>>>>> void arch_reserve_mem_area(acpi_physical_address addr, size_t>>>> size);>>>>>>>> #else>>>>>>>> @@ -491,7 +491,7 @@ static inline void>>>>>>>> arch_reserve_mem_area(acpi_physical_address 
addr,>>>>>>>> }>>>>>>>> #endif /* CONFIG_X86 */>>>>>>>> #else>>>>>>>> -#define acpi_os_set_prepare_sleep(func, pm1a_ctrl, pm1b_ctrl) do>>>>>>>> { } while>>>>>>>> (0)>>>>>>>> +#define acpi_os_set_prepare_sleep(func, val_a, val_b, ext) do {>>>>>>>> +} while (0)>>>>>>>> #endif>>>>>>>>>>>>>>>> #if defined(CONFIG_ACPI) && defined(CONFIG_PM_RUNTIME)>>>>>>>> -->>>>>>>> 1.7.9.5>>>>>>>>>>>>>>>> -->>>>>>>> To unsubscribe from this list: send the line "unsubscribe linux->> acpi">>>>>>>> in the body of a message to majordomo@vger.kernel.org More>>>>>>>> majordomo info at
https://lkml.org/lkml/2013/7/24/377
TDD: Testing sub classes

We ran into another interesting testing dilemma while refactoring the view model code which I described in an earlier post, to the point where we have an abstract class and three sub classes, which means that we now have 3 classes which do the same thing 80% of the time.

As I mentioned in a post a couple of weeks ago, one of the main refactorings that we did was to move some calls to dependency methods from the constructor into properties so that those calls would only be made if necessary. After we'd done this the code looked a bit like this:

  public abstract class ParentModel
  {
      private readonly Dependency1 dependency1;
      ...

      public decimal Field1
      {
          get { return dependency1.Calculation1(); }
      }

      public decimal Field2
      {
          get { return dependency1.Calculation2(); }
      }
  }

  public class BusinessProcess1Model : ParentModel { }
  public class BusinessProcess2Model : ParentModel { }
  public class BusinessProcess3Model : ParentModel { }

We wanted to ensure that the tests we had around this code made sure that the correct calls were made to 'dependency1', but because ParentModel is an abstract class the only way that we can do this is by testing one of its sub classes.

The question is: should we test this behaviour in each of the sub classes and therefore effectively test the same thing three times, or do we just test it via one of the sub classes and assume that's enough?

Neither of the options seems really great, although if we cared only about behaviour then we would test each of the sub classes independently and forget that the abstract class even exists for testing purposes.

While the logic behind this argument is quite solid, we would end up breaking 3 tests if we needed to refactor our code to call another method on that dependency, for example.
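One middle ground between the two options is to write the shared-behaviour check once and run it against every concrete subclass. Here is a sketch in Python rather than the post's C# — the class names mirror the post, everything else (the stubbed dependency, the return value 42) is invented for illustration:

```python
# Shared superclass behaviour, checked once per concrete subclass.

class Dependency1:
    def __init__(self):
        self.calculation1_calls = 0

    def calculation1(self):
        self.calculation1_calls += 1
        return 42


class ParentModel:
    def __init__(self, dependency1):
        self._dependency1 = dependency1

    @property
    def field1(self):
        # The dependency call happens lazily, on property access.
        return self._dependency1.calculation1()


class BusinessProcess1Model(ParentModel):
    pass


class BusinessProcess2Model(ParentModel):
    pass


class BusinessProcess3Model(ParentModel):
    pass


def check_field1_delegates(model_class):
    """Shared-behaviour check: field1 delegates to the dependency exactly once."""
    dependency = Dependency1()
    model = model_class(dependency)
    assert model.field1 == 42
    assert dependency.calculation1_calls == 1


for cls in (BusinessProcess1Model, BusinessProcess2Model, BusinessProcess3Model):
    check_field1_delegates(cls)
```

This keeps a single definition of the expected behaviour, so a refactoring that changes which dependency method is called only requires editing one check, while still exercising every subclass in case one of them shadows the property.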
I suppose that makes sense in a way, since we have actually changed the behaviour of all those classes, but it seems to me that we only really need to know from one failing test that we've broken something, and anything beyond that is a bit wasteful.

In C# it's not actually possible for 'Field1' or 'Field2' to be overridden with an alternate implementation unless we defined those properties as 'virtual' on the 'ParentModel', which we haven't done. We could however use the 'new' keyword to redefine what those properties do if the caller had a reference directly to the sub class instead of to the abstract class, which means it is possible for a call to 'Field1' to not call 'dependency1', which means that maybe we do need to test each of them individually.

I'm not sure which approach I prefer; neither seems better than the other in my mind.

About the author

Mark Needham is a Developer Relations Engineer for Neo4j, the world's leading graph database.
https://markhneedham.com/blog/2009/09/13/tdd-testing-sub-classes/
17.4.2. ThreadPoolExecutor

ThreadPoolExecutor is an Executor subclass that uses a pool of threads to execute calls asynchronously.

class concurrent.futures.ThreadPoolExecutor(max_workers=None)

    An Executor subclass that uses a pool of at most max_workers threads to execute calls asynchronously.

17.4.2.1. ThreadPoolExecutor Example

    import concurrent.futures
    import urllib.request

    URLS = ['', '', '', '', '']

    # Retrieve a single page and report the URL and contents
    def load_url(url, timeout):
        with urllib.request.urlopen(url, timeout=timeout) as conn:
            return conn.read()

    # We can use a with statement to ensure threads are cleaned up promptly
    with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
        # Start the load operations and mark each future with its URL
        future_to_url = {executor.submit(load_url, url, 60): url for url in URLS}
        for future in concurrent.futures.as_completed(future_to_url):
            url = future_to_url[future]
            try:
                data = future.result()
            except Exception as exc:
                print('%r generated an exception: %s' % (url, exc))
            else:
                print('%r page is %d bytes' % (url, len(data)))

17.4.3. ProcessPoolExecutor

class concurrent.futures.ProcessPoolExecutor(max_workers=None)

    An Executor subclass that executes calls asynchronously using a pool of at most max_workers processes.

    Changed in version 3.3: When one of the worker processes terminates abruptly, a BrokenProcessPool error is now raised. Previously, behaviour was undefined but operations on the executor or its futures would often freeze or deadlock.

exception concurrent.futures.process.BrokenProcessPool

    Derived from RuntimeError, this exception class is raised when one of the workers of a ProcessPoolExecutor has terminated in a non-clean fashion (for example, if it was killed from the outside).

    New in version 3.3.
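The ProcessPoolExecutor example itself is lost in this extraction. The following stand-in (my own numbers and helper, not the official example) shows the same pattern of farming CPU-bound work out to worker processes with `executor.map`:

```python
# A minimal ProcessPoolExecutor sketch: test several numbers for
# primality, one is_prime call per worker-pool task.
import math
from concurrent.futures import ProcessPoolExecutor


def is_prime(n):
    """Trial-division primality test; cheap enough for small inputs."""
    if n < 2:
        return False
    for i in range(2, int(math.sqrt(n)) + 1):
        if n % i == 0:
            return False
    return True


if __name__ == "__main__":
    numbers = [109, 113, 117, 119, 127]
    # Each is_prime call runs in a separate worker process.
    with ProcessPoolExecutor(max_workers=2) as executor:
        for n, prime in zip(numbers, executor.map(is_prime, numbers)):
            print("%d is prime: %s" % (n, prime))
```

The `if __name__ == "__main__"` guard matters here: on platforms that spawn rather than fork, worker processes re-import the main module, and the guard keeps them from recursively creating pools.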
http://docs.activestate.com/activepython/3.5/python/library/concurrent.futures.html
US8266706B2 - Cryptographically controlling access to documents - Google Patents

Publication number: US8266706B2
Authority: US
Grant status: Grant
Prior art keywords: document, key, data, device

Description

Granting access to user data is typically performed programmatically. That is, an operating system or web service grants access to the data based on access rights of the user. This model is not very secure, particularly in web-hosted environments in which the user's data is stored on a server that is accessible by many other users or processes. If the security of the server is compromised, the user's data may be accessed without the user's permission or knowledge. The more entities that are involved in handling a user's data, the less secure the data is.

Briefly, aspects of the subject matter described herein relate to cryptographically controlling access to documents. In aspects, documents are encrypted to protect them from unauthorized access. A security principal seeking to access a document first obtains the document. The document includes an identifier that identifies security data associated with the document. The security data includes an encrypted portion that includes authorizations for security principals that have access to the document. A security principal having the appropriate key can decrypt its authorization in the security data to obtain one or more other keys that may be used to access the document. These other keys correspond to access rights that the security principal has with respect to the document.
Identifying aspects of the subject matter described in the Detailed Description is not intended to identify key or essential features of the claimed subject matter. The aspects described above and other aspects of the subject matter described herein are illustrated by way of example and not limited in the accompanying figures, in which like reference numerals indicate similar elements.

Controlling Access to Documents

A user may use a user device 205 to store data on the storage device 210. The user data may then be accessed by the user devices 205-207, the services 220-223, and the hosted application 215. The user data may also be replicated on the replication devices 211-212. The user devices 206 and 207 may be operated by the user who stored the data or may be operated by other users to whom the user has given access rights to the data. For example, a user may have a computer (e.g., user device 205) at work with which the user stores the data on the storage device 210. At home, the user may have another computer (e.g., user device 206) with which the user accesses the data. A user may also have a cell phone or other electronic device (e.g., user device 207) with which the user accesses the data. When a user is traveling, the user may access the data via a computer the user takes with him or via another computer or electronic device the user is able to use. As mentioned previously, the user may desire to have other users have access to the data and may grant the other users such access. These users may use computers or other electronic devices (e.g., user devices 206 and 207) to access the data according to their access rights. The user may desire to access the data via a hosted application 215. The user may access the hosted application 215 via a web browser, for example, and may then access the data via the hosted application 215. The user may desire to have certain services have access to the user's data.
For example, the user may wish to have an ad server 221 access the user's data to provide relevant ads to the user or others. The user may desire to have a search engine 220 have access to the user's data to allow others to find the user's data. The user may desire to have an archival service 222 have access to the data to create archival backups of the data. The user may also desire to have other services 223 have access to the data for various purposes. The user may desire each entity with access to the user data be given a certain set of access rights that may vary from entity to entity. For example, the user may desire an archival service to be able to copy the data but not to be able to read the data in a meaningful way or to modify the data. Being able to copy the data without reading it in a meaningful way or modifying it is sometimes referred to as “copy-only” access. As another example, the user may desire to have the ad server 221 and the search engine 220 be able to read the data but not be able to write to the data. The user may desire to have some colleagues have read/write access to the data while other business associates have read access or copy-only access to the data. The network 225 represents any mechanism and/or set of one or more devices for conveying data from one entity to another and may include intra- and inter-networks, the Internet, phone lines, cellular networks, networking equipment, and the like. The user may desire to have devices of the network 225 be able to copy the data to transmit it to other entities but not to be able to change the data or read it in a meaningful way. 
Examples of the devices (e.g., devices 205-207 and 210-212) include cell phones, text messaging devices, smart phones, networking devices, and the special and general purpose electronic devices (or portions thereof) described in conjunction with the figures. As will be recognized by those of skill in the art, having many entities handling or having access to the data makes it more difficult to keep the data secure and to ensure that access is controlled as desired. Aspects of the subject matter described herein address controlling the access as described below. In one embodiment, the requesting entity is an electronic device such as a computer and the intermediary entities 310 and 330 are zero or more networking devices, servers, or other devices that are between the requesting entity and the storage access entity 315 and/or the security data repository 335. The storage access entity 315 is the device that is capable of accessing the storage device (e.g., the storage device 320) upon which a requested document is stored. Document as used herein includes any set of bits of any length that are capable of being stored on a storage device. Because the data is encrypted, it can only be meaningfully read by someone who has a key for decrypting the data. As will be discussed in further detail below, these keys are kept in security data in a security data repository. With the appropriate key, a user can decrypt the encrypted data and access the content therein. The storage device 320 is any computer-readable medium capable of storing data and may include distributed file systems, for example. Some exemplary computer-readable media that are suitable for the storage device 320 have been described above. The security data repository 335 stores security data pertaining to the documents stored on the storage device 320.
The security data repository 335 may include one device or several devices that work in concert with each other. The security data repository 335 may include a security data record for each version of a document. The requesting entity 305 may request a security data record corresponding to a retrieved document by sending a security data identifier included in the document to the security data repository and requesting the security data identified thereby. In one embodiment, the security data may be stored in the document itself. In this embodiment, the requesting entity may obtain the security data directly from the document. In one embodiment, one or more of the entities 305, 310, 315, and 330 may be one or more processes or components that execute on one or more devices. In one embodiment, the storage device 320 and/or the security data repository 335 may be devices included in or attached to the device upon which the requesting entity 305 executes. Documents stored in the storage device 320 may be placed there by a user of the device upon which the requesting entity 305 executes, from another device, or may be placed there by a file replicating infrastructure, for example. The document identifier 405 may be used to uniquely identify a document in a given namespace. For example, a uniform resource identifier (URI) having an http-like syntax (e.g., live://alice/users/file1.txt) may be used to identify documents in a given namespace. The security data identifier 410 may be used to identify security data associated with the document. In one embodiment, the security data identifier 410 is a hash of the fields (other than itself) in a security data structure (e.g., the security data structure 427). A hash takes input data and calculates a fixed-length output.
Given a sufficiently large fixed-length output and a suitable hash, the hash effectively provides a unique identifier for the input stream. The timestamp field 410 may include a timestamp that indicates when the version was created. As discussed previously, the encrypted data field 420 may include any content that the user wishes to secure. The signature field 425 comprises any one or more mechanisms that may be used to ensure that the document version data structure 400 was created by an authorized user and has not changed since creation. The document version data structure 400 may include more or fewer fields as long as it includes a mechanism for identifying or including security data pertaining to the document and a mechanism for encrypting desired data. The security data structure 427 may include a security data identifier field 430, one or more authorization fields 435, one or more keys 440, and a signature 425. In one embodiment, the security data identifier in the security data identifier field 430 may be calculated as described previously (i.e., as a hash of the other fields of the security data structure 427). The authorization fields 435 include an authorization for each security principal that is to have access to the document version data structure 400. In some embodiments, a security principal is an entity that can be positively identified and verified via a technique known as authentication. In other embodiments, a security principal may comprise a key decrypted from the security data associated with another document. A security principal may include a user, machine, service, process, other entity, decrypted key, or multiple (e.g., groups) of one or more of the above. Each authorization may be encrypted by a key that may be decrypted by a key held by or created by the security principal. Public key/private key cryptography is one mechanism that may be used to encrypt/decrypt an authorization.
As a particular security principal may have many keys and there may be many authorizations in a security document, in one embodiment, an optimization provides a key hint: the first few bits (in plain text) of a key that may be used to decrypt the authorization. The key hint allows an entity to quickly determine which authorizations it should attempt to decrypt, as the entity can simply compare the first few bits with its key. When there are hundreds or thousands of authorizations, the time savings provided by this mechanism may be substantial. Because only a few bits (e.g., between 2 and 16) may be provided, the strength of the mechanism used to encrypt/decrypt the authorizations may not be significantly weakened. If needed, the strength of the mechanism may be increased by using longer keys. In one embodiment, an authorization includes encrypted keys that allow a security principal to perform one or more access rights with respect to a document version. For example, a user principal may be given the rights to read the document, create new versions of the document, change which security principals may access the document, and perform any other security-related actions with respect to the document. Another user principal may be given read-only or write-only access. Entities that are not given any rights with respect to a document may still have copy-only access (i.e., the ability to copy but not meaningfully read the encrypted data). Such entities may be used, for example, for archiving documents. In another embodiment, the authorization may include an encrypted key that allows the security principal to decrypt additional keys elsewhere (e.g., in key(s) 440) of the security data structure 427. These additional keys may grant access rights to the document to the security principal.
This may be done, for example, to reduce the space needed for the security data structure 427 as a single key in an authorization may be used to decrypt multiple keys elsewhere in the security data structure 427. When a security data structure 427 includes hundreds or thousands of authorizations, many authorizations may share a common set of access rights. While the keys corresponding to these access rights could be included in the authorization itself, it may be more space efficient to provide a single key in each authorization that allows the security principals to decrypt the access keys elsewhere in the security data structure 427. The keys 440 may include encrypted private keys as discussed previously that may correspond to access rights granted in the document. These keys may be decrypted by keys obtained in the authorization(s) field 435 as discussed previously. The signature field 445 may be used in a similar fashion as the signature field 425 of the data structure 400. The security data structure 427 may include more or fewer fields as long as it includes a mechanism for providing keys to access its associated document(s) to authorized users. The document version data structure 400 may include an identifier that identifies another document version data structure. The other document version data structure may include a key that allows access to the document. This mechanism may be used to provide group access to a document. For example, the authorizations in the security data structure associated with the first document version data structure may correspond to keys held by members of a group. Any member of the group who has an appropriate key may be able to obtain a member key from the security data that allows the member to access the second document according to rights granted to the group in the security data associated with the second document. Thus, accessing a document may involve accessing an intermediate document. 
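The key-hint filtering described above can be sketched in Python. This is an illustrative model only; the 128-bit key size, 8-bit hint width, and dictionary layout are assumptions for demonstration, not details taken from the patent:

```python
def key_hint(key: int, key_len_bits: int = 128, hint_bits: int = 8) -> int:
    # The plain-text hint is the first few (most significant) bits of the key.
    return key >> (key_len_bits - hint_bits)

def candidate_authorizations(authorizations, key, key_len_bits=128, hint_bits=8):
    # Compare each stored hint against this principal's key before
    # attempting any expensive decryption.
    mine = key_hint(key, key_len_bits, hint_bits)
    return [a for a in authorizations if a["hint"] == mine]

# Illustrative use: two authorizations, only one hint matches this key.
auths = [{"hint": 0xAB, "blob": b"..."}, {"hint": 0x12, "blob": b"..."}]
key = 0xAB << 120  # a 128-bit key whose top 8 bits are 0xAB
print(len(candidate_authorizations(auths, key)))  # 1
```

Only the surviving candidates would then be decrypted with the principal's key, which is where the time savings for large authorization lists comes from.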
In another embodiment, the document version data structure 400 may omit the identifier. In this embodiment, another mechanism may suggest that the keys in the first document's security data may provide access to the second document. For example, if the first document was known to provide group access to another document, the member key from the first document's security data may be tried on every authorization in the security data for every other document the user attempts to access. Key hints as described previously may speed this process. At block 510, an entity requests a document that includes encrypted data. At block 515, an entity receives the request. At block 520, the document is sent to the requester. At block 525, the requestor obtains the document. At block 530, an entity makes a request and obtains security data associated with the document. At block 535, at least a portion of the security data (e.g., an authorization) is decrypted to obtain an indication of authorized access action(s) pertaining to the document. At block 540, a key corresponding to the action is obtained from the security data. In one embodiment, the key is obtained from the decryption action of block 535; in another embodiment, the key is obtained from another portion of the security document. At block 545, the key is used to perform the action. At block 550, the actions end. In one embodiment, the actions occur in the order described above. The requesting component 610 represents the requesting entity described previously. The cryptographic component 615 is used to encrypt and decrypt data and may comprise a library of cryptographic routines, for example.
The document locator 620 determines where the document is located, which will either be on the local data store 630 or on some data store external to the device 605. The security data component 625 interacts with the security data to obtain access rights pertaining to the document. The communications mechanism 635 allows the device 605 to communicate with other devices to obtain documents and security data, for example. The communications mechanism may be a network interface or adapter 170, modem 172, or any other means for establishing communications. It will be recognized that other variations of the device 605 are possible. As can be seen from the foregoing detailed description, aspects have been described related to cryptographically controlling access to documents.
https://patents.google.com/patent/US8266706B2/en
In part 1, I introduced the code for profiling, covered the basic ideas of analysis-driven optimization (ADO), and got you started with the NVIDIA Nsight Compute profiler. In part 2, you began the iterative optimization process. In this post, you finish the analysis and optimization process, determine whether you have reached a reasonable stopping point, and I draw some final conclusions.

Converting the reduction to warp-shuffle

The result of the analysis from part 2 is that your focus has been placed on the following line of code, with the idea of reducing shared memory pressure:

if (id < s) smem[id] += smem[id+s];

What can you do? In the code refactoring of the previous step, you converted to a warp-stride loop, to permit coalesced access. That resulted in the averaging sum operation spreading across all 32 members of the warp. Thus, you had to combine these, before computing the average. You used a warp-shuffle reduction there, for convenience and simplicity. The line of code you are focused on now is also part of a reduction, but it is using a classical shared-memory sweep parallel reduction methodology. You can reduce the pressure on shared memory here, by converting the reduction to use a similar warp-shuffle based reduction methodology. Because this involves multiple warps in this second phase of your kernel activity, the code is a two-stage warp-shuffle reduction. For more information about warp-shuffle, see Faster Parallel Reductions on Kepler.
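The offset-halving pattern behind a warp-shuffle reduction can be modeled on the host. This is a plain Python illustration of the indexing only, not GPU code: __shfl_down_sync(mask, v, offset) lets lane i read lane i+offset's value, and halving the offset each step leaves the warp-wide sum in lane 0.

```python
def warp_shuffle_reduce(vals):
    # Model of an offset-halving warp reduction: at each step, lane i adds
    # the value held by lane i+offset; offsets go 16, 8, 4, 2, 1 for a
    # 32-lane warp.
    vals = list(vals)
    n = len(vals)            # warp size; 32 on current NVIDIA GPUs
    offset = n >> 1
    while offset > 0:
        for lane in range(n):
            src = lane + offset
            if src < n:      # lanes past the end contribute nothing here
                vals[lane] += vals[src]
        offset >>= 1
    return vals[0]           # lane 0 ends up holding the full sum

print(warp_shuffle_reduce(range(32)))  # 496, i.e. sum(0..31)
```

Five steps instead of a 32-iteration sweep, and no shared memory traffic, which is exactly the pressure the refactoring below relieves.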
The refactored kernel looks like the following code example:

template <typename T>
__global__ void gpu_version4(const T * __restrict__ input, T * __restrict__ output, const T * __restrict__ matrix, const int L, const int M, const int N){
  // parallelize threadIdx.x over vector length, and blockIdx.x across k (N)
  // do initial vector reduction via warp-stride loop
  __shared__ T smem[my_L];
  int idx = threadIdx.x;
  int idy = threadIdx.y;
  int id  = idy*warpSize+idx;
  int k = blockIdx.x;
  T v1;
  for (int y = threadIdx.y; y < L; y+=blockDim.y){ // vertical block-stride loop
    v1 = 0;
    for (int x = threadIdx.x; x < M; x+=warpSize) // horizontal warp-stride loop
      v1 += input[k*M*L+y*M+x];
    for (int offset = warpSize>>1; offset > 0; offset >>= 1) // warp-shuffle reduction
      v1 += __shfl_down_sync(0xFFFFFFFF, v1, offset);
    if (!threadIdx.x) smem[y] = v1/M;}
  __syncthreads();
  v1 = smem[id];
  for (int i = 0; i < L; i++){ // matrix-vector multiply
    T v2 = v1 * matrix[i*L+id];
    // 1st warp-shuffle reduction
    for (int offset = warpSize>>1; offset > 0; offset >>= 1)
      v2 += __shfl_down_sync(0xFFFFFFFF, v2, offset);
    if (idx == 0) smem[idy] = v2;
    __syncthreads(); // put warp results in shared mem
    // hereafter, just warp 0
    if (idy == 0){
      // reload v2 from shared mem if warp existed
      v2 = (idx < ((blockDim.x*blockDim.y)>>5))?smem[idx]:0;
      // final warp-shuffle reduction
      for (int offset = warpSize>>1; offset > 0; offset >>= 1)
        v2 += __shfl_down_sync(0xFFFFFFFF, v2, offset);}
    if (!id) output[k+i*N] = v2;}
}

You have replaced your shared-memory sweep reduction with a two-stage warp-shuffle reduction. No changes are needed at the kernel launch point, other than to change the name of the kernel to your new gpu_version4. If you compile and run this code, you see an additional speedup:

CPU execution time: 0.5206s
Kernel execution time: 0.012659s

Return to the profiler. Repeat the disconnect, connect, launch, run sequence and then reset the baseline. Figure 1 shows the results.
The bottleneck rule has pointed you back to latency again, with the same message as in the previous post, because this latest change relieved pressure on most of the various GPU subsystems. The latency that the profiler is now pointing out is just the memory latency inherent in your loading of ~4 GB of data for this processing. You can get a sense of this by looking at the Warp State Statistics section, where you now see Stall Long Scoreboard as your most significant stall reason (Figure 2). Hover over Stall Long Scoreboard for the description: “Average number of warps resident per issue cycle, waiting on a scoreboard dependency on L1TEX (local, global, surface, tex) operation”. For more information, see Warp Scheduler States. Likewise, the rule states: [Warning] On average each warp of this kernel spends 15.4 cycles being stalled waiting for a scoreboard dependency on a L1TEX (local, global, surface, texture) operation. This represents about 46.1% of the total average of 33.3 cycles between issuing two instructions. To reduce the number of cycles waiting on L1TEX data accesses verify the memory access patterns are optimal for the target architecture, attempt to increase cache hit rates by increasing data locality or by changing the cache configuration, and consider moving frequently used data to shared memory. You already know you don’t have local, surface, or (explicit) tex operations, so global memory is again the focus. You can again get an additional sense of this by looking at the source view (Figure 3). The following line of code is dominating the sampling data, as well as being the biggest contributor to your warp stall reasons: v1 += input[k*M*L+y*M+x]; Hover the mouse over the brown bars to get the warp stall reasons. It is easy to see in Figure 3, but what if it were less obvious or you had more code to sort through? The profiler can help you here. 
On the Details page, the Warp State Statistics section listed the highest stall reason as Stall Long Scoreboard. How can you find the line with the highest contributor to that? First, for Navigation, select stall_long_sb. Then, choose the button to the right with an up-arrow and a line. This asks the profiler to show the line with the highest reported value for that metric (Figure 4). The profiler highlighted the expected line for you. You have optimized this step as much as possible. How can you be sure of that? At this point, to achieve your next (and final) round of optimization and to answer this important question, you must revisit your code and consider more major refactoring. At a high level, your code is producing a set of intermediate vectors that are the results of the averaging phase and then multiplying each of those vectors by an unchanging matrix to get a set of result vectors. This second phase of operations, the matrix-vector multiply step, could be refactored to be a matrix-matrix multiply because the input matrix is constant across each matrix-vector multiply step. You could rewrite or decompose your kernel into two separate kernels. The first kernel performs the vector averaging, writing out the set of average vectors as a matrix. The second kernel performs the matrix-matrix multiply. Rather than writing your own kernel for this refactored second phase, use cuBLAS, a highly optimized library. This refactoring also means that you must store the intermediate vector results in global memory to facilitate passing them to the cuBLAS gemm (matrix-matrix multiply) operation, along with the input matrix. This store to global memory was not necessary in your previous realizations, because you could just carry the vector forward in local thread storage, for use in the matrix-vector multiply. This refactoring also isolates the vector averaging step, which allows you to get an independent measurement of whether this step is truly optimal.
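The observation that N matrix-vector products against one unchanging matrix collapse into a single matrix-matrix product can be checked with a tiny pure-Python model (the 2x2 matrix and three vectors are arbitrary illustrative values):

```python
def matvec(A, v):
    # one matrix-vector product
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[1, 2], [3, 4]]               # the fixed matrix
vecs = [[5, 6], [7, 8], [9, 10]]   # three "averaged" vectors

# one matvec per vector...
one_by_one = [matvec(A, v) for v in vecs]

# ...equals a single matmul with the vectors packed as columns of B
B = [[v[i] for v in vecs] for i in range(2)]
batched = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(3)]
           for i in range(2)]

# column j of the batched result is the j-th matrix-vector product
assert one_by_one == [[batched[i][j] for i in range(2)] for j in range(3)]
print(batched)  # [[17, 23, 29], [39, 53, 67]]
```

Packing the averaged vectors into a matrix in global memory is exactly what allows the second phase to be handed to a single gemm call.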
The first phase vector averaging, now isolated in its own kernel, is dominated by the global load operations of 4 GB of data, focused on the line of code the profiler has already indicated in this step.

Refactoring redux

As indicated in the previous section, your task now is to refactor your code by breaking the kernel into two pieces, the first of which is your existing phase 1 kernel code but writing out the intermediate vector into a matrix of results, in global memory. The second piece is a properly crafted cuBLAS SGEMM call, to perform the matrix-matrix multiply operation. Getting this right involves accounting for the transpositions needed on the input data and when comparing the results for accuracy. The final refactored code looks like the following code example:

// compile with: nvcc -Xcompiler -fopenmp -o t5 t5.cu -O3 -lineinfo -lcublas
#include <cublas_v2.h>
#include <iostream>
#include <vector>
#include <time.h>
#include <sys/time.h>

#define cudaCheckErrors(msg) \
    do { \
      cudaError_t __err = cudaGetLastError(); \
      if (__err != cudaSuccess) { \
        fprintf(stderr, "Fatal error: %s (%s at %s:%d)\n", \
            msg, cudaGetErrorString(__err), \
            __FILE__, __LINE__); \
        fprintf(stderr, "*** FAILED - ABORTING\n"); \
        exit(1); \
      } \
    } while (0)

// cuBLAS API errors
static const char *_cudaGetErrorEnum(cublasStatus_t error) {
  switch (error) {
    case CUBLAS_STATUS_SUCCESS:          return "CUBLAS_STATUS_SUCCESS";
    case CUBLAS_STATUS_NOT_INITIALIZED:  return "CUBLAS_STATUS_NOT_INITIALIZED";
    case CUBLAS_STATUS_ALLOC_FAILED:     return "CUBLAS_STATUS_ALLOC_FAILED";
    case CUBLAS_STATUS_INVALID_VALUE:    return "CUBLAS_STATUS_INVALID_VALUE";
    case CUBLAS_STATUS_ARCH_MISMATCH:    return "CUBLAS_STATUS_ARCH_MISMATCH";
    case CUBLAS_STATUS_MAPPING_ERROR:    return "CUBLAS_STATUS_MAPPING_ERROR";
    case CUBLAS_STATUS_EXECUTION_FAILED: return "CUBLAS_STATUS_EXECUTION_FAILED";
    case CUBLAS_STATUS_INTERNAL_ERROR:   return "CUBLAS_STATUS_INTERNAL_ERROR";
  }
  return "<unknown>";
}

#define USECPSEC 1000000ULL

unsigned long long dtime_usec(unsigned long long start){
  timeval tv;
  gettimeofday(&tv, 0);
  return ((tv.tv_sec*USECPSEC)+tv.tv_usec)-start;
}

// perform vector averaging over M vectors of length L, followed by matrix-vector multiply
// repeat the above N times
// input vectors are stored as a set of N column-major matrices
// for each k in N: output[k] = matrix*input[k]
template <typename T>
void cpu_version1(T *input, T *output, T *matrix, int L, int M, int N){
#pragma omp parallel for
  for (int k = 0; k < N; k++){      // repeat the following, N times
    std::vector<T> v1(L);           // vector length of L
    for (int i = 0; i < M; i++)     // compute average vector over M input vectors
      for (int j = 0; j < L; j++)
        v1[j] += input[k*M*L+j*M+i];
    for (int j = 0; j < L; j++)
      v1[j] /= M;
    for (int i = 0; i < L; i++)     // matrix-vector multiply
      for (int j = 0; j < L; j++)
        output[i*N+k] += matrix[i*L+j]*v1[j];
  }
}

const int my_L = 1024; // maximum 1024
const int my_M = 1024;
const int my_N = 1024;

template <typename T>
__global__ void gpu_version5(const T * __restrict__ input, T * __restrict__ output, const int L, const int M, const int N){
  // parallelize threadIdx.x over vector length, and blockIdx.x across k (N)
  // do initial vector reduction via warp-stride loop
  int k = blockIdx.x;
  T v1;
  for (int y = threadIdx.y; y < L; y+=blockDim.y){ // vertical block-stride loop
    v1 = 0;
    for (int x = threadIdx.x; x < M; x+=warpSize)  // horizontal warp-stride loop
      v1 += input[k*M*L+y*M+x];
    for (int offset = warpSize>>1; offset > 0; offset >>= 1) // warp-shuffle reduction
      v1 += __shfl_down_sync(0xFFFFFFFF, v1, offset);
    if (!threadIdx.x) output[k+y*N] = v1/M;}
}

typedef float ft;

int main(){
  ft *d_input, *h_input, *d_output, *h_outputc, *h_outputg, *d_matrix, *h_matrix, *d_result;
  int L = my_L;
  int M = my_M;
  int N = my_N;
  // host allocations
  h_input   = new ft[N*L*M];
  h_matrix  = new ft[L*L];
  h_outputg = new ft[N*L];
  h_outputc = new ft[N*L];
  // data initialization
  for (int i = 0; i < N*L*M; i++) h_input[i] = (rand()&1)+1;  // 1 or 2
  for (int i = 0; i < L*L; i++) h_matrix[i] = (rand()&1)+1;   // 1 or 2
  // create result to test for correctness
  unsigned long long dt = dtime_usec(0);
  cpu_version1(h_input, h_outputc, h_matrix, L, M, N);
  dt = dtime_usec(dt);
  std::cout << "CPU execution time: " << dt/(float)USECPSEC << "s" << std::endl;
  // device allocations
  cudaMalloc(&d_input,  N*L*M*sizeof(ft));
  cudaMalloc(&d_output, N*L*sizeof(ft));
  cudaMalloc(&d_matrix, L*L*sizeof(ft));
  cudaMalloc(&d_result, N*L*sizeof(ft));
  cudaCheckErrors("cudaMalloc failure");
  // copy input data from host to device
  cudaMemcpy(d_input,  h_input,  N*L*M*sizeof(ft), cudaMemcpyHostToDevice);
  cudaMemcpy(d_matrix, h_matrix, L*L*sizeof(ft),   cudaMemcpyHostToDevice);
  cudaMemset(d_output, 0, N*L*sizeof(ft));
  cudaCheckErrors("cudaMemcpy/Memset failure");
  // cublas setup
  cublasHandle_t h;
  ft alpha = 1.0;
  ft beta  = 0.0;
  cublasStatus_t c_res = cublasCreate(&h);
  if (c_res != CUBLAS_STATUS_SUCCESS) {std::cout << "CUBLAS create error: " << _cudaGetErrorEnum(c_res) << std::endl; return 0;}
  // run on device and measure execution time
  dim3 block(32,32);
  dt = dtime_usec(0);
  gpu_version5<<<N, block>>>(d_input, d_output, L, M, N);
  cudaCheckErrors("kernel launch failure");
  c_res = cublasSgemm(h, CUBLAS_OP_T, CUBLAS_OP_T, N, N, L, &alpha, d_matrix, L, d_output, N, &beta, d_result, N);
  if (c_res != CUBLAS_STATUS_SUCCESS) {std::cout << "CUBLAS gemm error: " << _cudaGetErrorEnum(c_res) << std::endl; return 0;}
  cudaDeviceSynchronize();
  cudaCheckErrors("execution failure");
  dt = dtime_usec(dt);
  cudaMemcpy(h_outputg, d_result, N*L*sizeof(ft), cudaMemcpyDeviceToHost);
  cudaCheckErrors("cudaMemcpy failure");
  for (int i = 0; i < N; i++)
    for (int j = 0; j < L; j++)
      if (h_outputg[i+N*j] != h_outputc[j+N*i]) {std::cout << "Mismatch at " << i << " was: " << h_outputg[i] << " should be: " << h_outputc[i] << std::endl; return 0;}
  std::cout << "Kernel execution time: " << dt/(float)USECPSEC << "s" << std::endl;
  return 0;
}

If you compile and run this code, you get results like the following example:

$ nvcc -o t5 t5.cu -Xcompiler -fopenmp -O3 -lineinfo -lcublas
$ ./t5
CPU execution time: 0.521357s
Kernel execution time: 0.005525s
$

You have again improved the performance of your code and your GPU implementation is now almost 100x faster than your CPU OpenMP version. To be fair, this final optimization to convert the sequence of matrix-vector multiply operations into a single matrix-matrix multiply could equivalently be done on the CPU version. Using a high-quality CPU BLAS library would also probably give a better result there. What about the question asked earlier, "Is your global load operation optimal?" Because the first kernel is now dominated by the global loading of 4 GB of data, you can estimate the achieved bandwidth and compare it to a proxy measurement of the achievable memory bandwidth on your GPU. If the two numbers are close to each other, you can conclude that the global loading operation is nearly optimal and could not get any better. For the proxy measurement of achievable memory bandwidth on your GPU, use the CUDA sample code bandwidthTest. When run on this V100 GPU, the output looks like the following code example:

$ /usr/local/cuda/samples/bin/x86_64/linux/release/bandwidthTest
[CUDA Bandwidth Test] - Starting...
Running on...

 Device 0: Tesla V100-SXM2-32GB
 Quick Mode

 Host to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)   Bandwidth(GB/s)
   32000000                12.4

 Device to Host Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)   Bandwidth(GB/s)
   32000000                13.2

 Device to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)   Bandwidth(GB/s)
   32000000                739.2

The last number is the one you are interested in. A V100 has about 740 GB/s of available memory bandwidth, according to this measurement. The GB used here is 1 billion bytes, not 2^30 bytes. To get a comparison for your kernel, you must get a timing duration for just the kernel, not the kernel plus cuBLAS call.
Of course, you could modify the timing in your code to print this out, but look at the profiler one last time. There are now two kernels in your code: one that you wrote, and one that is launched by the cuBLAS call. Not all cuBLAS calls result in one single kernel call, but this usage does. Now when you disconnect, connect, launch, and choose Run to next kernel, you profile just your version 5 kernel. The profiler reports the execution duration as 5.22 milliseconds (Figure 5). This is most of the overall execution time that you measured of ~5.5 milliseconds! The 4 GB of data that you have is 4x1024x1024x1024 bytes. If you divide that by 5.22 milliseconds, you get an achieved bandwidth of approximately 823 GB/s, using the GB that is used by bandwidthTest. So, your averaging kernel is performing even better than bandwidthTest and is approximately optimal. The profiler output also indicates greater than 90% memory utilization, agreeing with your measurement. Choose Run to next kernel one more time, because you still have the cuBLAS SGEMM kernel waiting in the wings. Figure 6 shows the results in the GPU Speed of Light section after the baseline is cleared. As you suspected, this kernel (volta_sgemm_32x128_tt) is short, around 220 microseconds, making up most of the difference between your 5.2 millisecond global load kernel time and the overall measured duration of ~5.5 milliseconds. The profiler also reports this highly optimized library kernel is running the GPU at a high level of both compute utilization and memory utilization. You have some solid data now that says your code is roughly optimal, and now you should spend your precious time elsewhere.
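The bandwidth arithmetic above is easy to check (5.22 ms is the profiler-reported kernel duration; GB here means 10^9 bytes, matching bandwidthTest):

```python
bytes_loaded = 4 * 1024 * 1024 * 1024  # the 4 GB (4x1024x1024x1024 bytes) of input data
kernel_time_s = 5.22e-3                # profiler-reported kernel duration
gbps = bytes_loaded / kernel_time_s / 1e9
print(round(gbps))  # 823 -> ~823 GB/s, above the ~740 GB/s bandwidthTest proxy
```

Since the achieved figure exceeds the proxy measurement, the load-dominated averaging kernel is as close to optimal as this comparison can show.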
The following are some observations and suggestions to help you:

- Shorter is better. If you have a large kernel to analyze, the analysis is usually going to be more difficult. Some of the conclusions that you reached in this post were aided by the fact that you had only 10-20 lines of source code in your kernel to study.
- Kernels that change behavior are more difficult to perform high-level analysis on. The kernel in this example changed behavior from the first phase (vector averaging) to the second phase (matrix-vector multiply). In these cases, it may be expedient to break the kernel up into pieces that correspond to the behavioral phases of activity.

Summary

This post focused on an ADO process using Nsight Compute and a single kernel. If you have a complex application with multiple kernels, the ADO process usually starts with Nsight Systems, and you may iterate back and forth between Nsight Systems and Nsight Compute as you optimize kernels and other kernels move to the top of the priority list.

The analysis work in this post was performed on a machine with the following characteristics: Ubuntu 18.04.3, CUDA 11.1, GPU Driver version 455.23.05, GCC version 7.5.0, V100-SXM2-32 GB GPU, Intel® Xeon® Gold 6130 CPU @ 2.10GHz.

The code examples presented in this post are for instructional purposes only. They are not guaranteed to be defect-free or suitable for any particular purpose.

For more information, see the following resources:

- Hands-on optimization tutorial for NVIDIA Nsight tools
- GPU Performance Analysis video (part 8 of a 9-part CUDA Training Series that NVIDIA presented for OLCF and NERSC)
- Accelerating HPC Applications with NVIDIA Nsight Compute Roofline Analysis
- Roofline and NVIDIA Ampere GPU Architecture Analysis video

Acknowledgements

The author would like to thank the following individuals for their contributions: Sagar Agrawal, Rajan Arora, Ronny Brendel, Max Katz, Felix Schmitt, Greg Smith, and Magnus Strengert.
https://developer.nvidia.com/blog/analysis-driven-optimization-finishing-the-analysis-with-nvidia-nsight-compute-part-3/
CC-MAIN-2022-40
en
refinedweb
@Yveaux I am so terribly embarrassed! In order to rule out something wrong with the WS5100 I rebuilt the GW using an old Arduino UNO with an Ethernet shield. Still the same problem. Then, to really make sure, I pasted/copied the username and password from cloudmqtt into the sketch. Immediate success! It turned out that I had misread an uppercase I (indigo) for a lowercase l (lima). Still, the suggestions from the community were good, since I could iron out one possible cause after another. Thanks for the support!

bgunnarb @bgunnarb
Retired MSc. with electronics (and other things) as a hobby. Fairly adept in HW and with some knowledge of C++, Java and Unix, but still learning.

Best posts made by bgunnarb

- RE: [SOLVED] Cannot get GwWS5100MQTT to connect to cloudmqtt.com
- Battery life for Motion Sensor!
- RE: DHT11 Example Code on Arduino
  I think the reason is that you have not set a node ID. At least this is not visible in the sketch. OpenHab does not hand out node IDs. I have good experience. You should try setting #define MY_NODE_ID 1 in the beginning of your sketch, where you define the Child ID.
- RE: 💬 Atmospheric Pressure Sensor
- RE: How to calibrate a gauge sensor
- RE: DHT11 Example Code on Arduino
  Oops! I may have misled you. The #define MY_NODE_ID 1 has to go into the sketch before you call #include <MySensors.h>, as pointed out in the description of the Library API definitions: "Remember to set configuration defines before including the MySensors.h." In my sketches I always do all the #define(s) first, like below:
  // Rest of code goes here...
- RE: MQTT GW issues with sending MQTT msg
  Did you try MQTT.fx? I use that SW (free of charge) a lot when troubleshooting MQTT. Start by checking that MQTT.fx really does connect with the MQTT broker. Then subscribe first to the topic #, which means everything. Then narrow down by subscribing to mygateway1-out/# and so on.
  The latest version of MQTT.fx also has a Topics Collector that shows all topics on the broker as they arrive.
- RE: How to calibrate a gauge sensor

Latest posts made by bgunnarb

- RE: Gateway sends NACK to node
  Can you paste the code for the node please? Just a hint: have you set the node ID manually? HA does not assign node IDs automatically, as far as I know. You have to set it manually in the code using: #define MY_NODE_ID 23
  Node-ID is a number between 1 and 254, unique to each node. The default is AUTO, which requests an ID from the controller. HA does not assign node IDs.
- RE: Can a sensor "get lost" between a GW and a repeater?
  Thanks to @electrik and @mfalkvidd for your answers. I will try next time I have the opportunity. Currently the sensor is 400 km away from home.
- Can a sensor "get lost" between a GW and a repeater?
  I have a battery operated sensor that is placed at a distance from the GW where it should get a good connection. The distance is only 5 meters, with wooden walls in between. But very close to this sensor is another sensor with the REPEATER function enabled. The first sensor connects to the GW but loses contact after a few hours. After a reset of the sensor, it functions again. Question: could it be that the battery operated sensor loses the path to the GW, i.e. the routing is lost somehow, if it starts oscillating between connecting to the GW and the repeater? I have checked obvious causes of failure like wiring, capacitor etc. If the answer is no, then I have other things to check. I just want to rule out this cause of failure.
- RE: Problems ethernet GW with ESP8266 - NodeMcu V3.4
- RE: Problems ethernet GW with ESP8266 - NodeMcu V3.4
- MySensors --> MQTT --> OpenHab 2.5
  Let me offer my thoughts and recommendations: for myself, I have been using MQTT as the connection method for a number of years.
Doing that, you must have an MQTT broker as the "middle-man" to receive MQTT messages from the MySensors MQTT gateway and relay them to OH, which then uses the MQTT binding. Then you do not use the MySensors binding to OH at all. The advantage is that you can have a number of sources/sensors feeding information to the MQTT broker. In my case I have three geographical sites with MySensors nodes, each with a MySensors MQTT gateway, feeding the broker. Actually, at one of the sites there is also a Sonoff switch that reports status and gets commands via MQTT. (I am using the Espurna SW on the Sonoff.) Then my OH installation subscribes to the broker via the OH MQTT binding.

To dive deeper into details: because of this geographical set-up, I am using a commercial MQTT broker, which is accessible over the internet from all three sites. If you only operate one site, then you are better off using e.g. Mosquitto inside your firewall. As far as I understand, the OH MQTT binding does not offer auto-discovery unless you use the Homie naming convention. Check the OH documentation please. For MySensors nodes I use the MySensors notation, so a typical message looks like:

mqttgw1/28/3/1/0/0 23.2

for, say, a node-ID 28 and child-ID 3 that reports a temperature of 23.2 °C.

When I started using MQTT and the 1.0 binding I made all definitions in the .items text file, but when I converted to MQTT 2.0 I made all the definitions in the PaperUI. It is either/or depending on your preferences, but I think the general recommendation in the OH community is to use the PaperUI. If you stick with text files I strongly recommend that you use Visual Studio Code and the OpenHAB extension to VS Code. Then you get syntax checking and nice colours for all keywords.

As far as I understand, the MySensors OH-binding uses the ethernet or serial MySensors GW. Probably the set-up is simpler in the beginning but offers less flexibility in the long run.
I could never have achieved my set-up above using just a serial connection.
- RE: Home Assistent + Serial Gateway + Motion Sensor
  I'm not very good with HA, but in the GW sketch it seems to me that you test whether the Arduino is running at 8 MHz (#if F_CPU == 8000000L), which I guess it is not. So the #define MY_BAUD_RATE 115200 is never executed. So then what baud rate does the GW run at? Try moving the #define outside of the #if -- #endif, and maybe set a lower baud rate, also in the HA .yml.
- RE: Is MQTT Necessary? or, Use Case for MQ.
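The MySensors topic layout quoted above (mqttgw1/28/3/1/0/0 carrying the payload 23.2) follows the convention prefix/node-id/child-sensor-id/command/ack/type. A minimal parsing sketch in Python — the field names come from the MySensors serial protocol, but the helper itself is hypothetical, not part of any of the posts:

```python
def parse_mysensors_topic(topic: str, payload: str) -> dict:
    """Split a MySensors MQTT topic into its protocol fields."""
    prefix, node_id, child_id, command, ack, msg_type = topic.split("/")
    return {
        "prefix": prefix,
        "node_id": int(node_id),   # 1-254 for nodes, 0 for the gateway
        "child_id": int(child_id),
        "command": int(command),   # e.g. 1 = set
        "ack": int(ack),
        "type": int(msg_type),
        "payload": payload,
    }

msg = parse_mysensors_topic("mqttgw1/28/3/1/0/0", "23.2")
print(msg["node_id"], msg["child_id"], msg["payload"])  # 28 3 23.2
```

This is the same split a controller has to do when it subscribes to mygateway1-out/# and routes incoming readings by node and child ID.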
https://forum.mysensors.org/user/bgunnarb
File Server and Storage Restructure
Steven Teixeira
Duration: 1 month | Users: 50 | Devices: 50

The challenge was to bridge the file storage gaps within the company and come up with better ways to handle files and documents that company users work with on a daily basis.

Turlock, California, United States

Categories: Data Storage, Backup & Recovery; Windows

Before I started this project, each office in the company had a local file server that hosted a shared folder containing all the files that all the users in the company had been working on for years. All folders had permissions that were lumped together under one group, so if you worked in that office, you could get to any file in any folder. When offices needed to share files with each other, they would email the files back and forth.

We also use roaming profiles for all our users, mostly so we could swap out machines and not have many complaints about settings and preferences being different. Some users, however, occasionally visited other offices, and when they did, their profile would be transferred from their location to the machine at the office they were visiting, over the VPN tunnel on the WAN interfaces.

After exploring some options, DFS Replication and DFS Namespaces in Windows Server seemed to be our best option. We could set up and define a new structure for our storage, set up the proper permissions, and even use it to replicate roaming profiles from site to site for users that needed to visit other offices from time to time. I created a top-level folder, then specified folders underneath to correspond to office names, and under each office folder we defined folders for certain functions (accounting, human resources, marketing, and operations). This structure was static and couldn't be changed, so it was what I built our permissions off of. After that, the users were told to organize under each function as they saw best.
We also included a directory under each office's folder that was to have a folder for each user in the company that only they had access to, for their own files and projects. I wrote GPOs to map out these DFS shares and pointed everyone to the DFS namespace, so that it would pick the local file server first and refer to another server if the local server was unavailable.

As for roaming profiles, I looked at who traveled between offices and decided that for those who never traveled, their profile would stay on the local file server, but for those who did travel, their profile would be put into another share that was replicated via DFS to each office, so that each office had a copy of the profile.

I used Cobian Backup to back up our file servers at our corporate office each night, and since all servers contained the same data, it was a backup for the entire company.

Overall we achieved better resource availability through namespaces, more efficient and centralized backup, a consistent structure for the entire company, simplified collaboration with files shared between offices, and shortened login time for traveling users.

*Pictures are from the Microsoft Technet article named "DFS Step-by-Step Guide for Windows Server 2008"
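The static namespace layout described above — office folders, fixed function subfolders, plus per-user directories — can be sketched in a few lines. This is only an illustration of the structure, with hypothetical office and user names, not the actual deployment:

```python
import os
import tempfile

# Hypothetical names standing in for the real offices and users
OFFICES = ["Turlock", "Modesto"]
FUNCTIONS = ["Accounting", "Human Resources", "Marketing", "Operations"]
USERS = ["alice", "bob"]

def build_namespace(root):
    """Create the office/function/user folder skeleton under root."""
    created = []
    for office in OFFICES:
        for fn in FUNCTIONS:  # static function folders per office
            path = os.path.join(root, office, fn)
            os.makedirs(path, exist_ok=True)
            created.append(path)
        for user in USERS:    # per-user private folders
            path = os.path.join(root, office, "Users", user)
            os.makedirs(path, exist_ok=True)
            created.append(path)
    return created

root = tempfile.mkdtemp()
paths = build_namespace(root)
print(len(paths))  # 2 offices x (4 functions + 2 users) = 12
```

In the real deployment the permissions would be applied at the static function level, as the write-up describes, with users free to organize below that.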
https://community.spiceworks.com/people/steventeixeira/projects/file-server-and-storage-restructure
Example: Find ASCII value of a character

public class AsciiValue {

    public static void main(String[] args) {

        char ch = 'a';
        int ascii = ch;
        // You can also cast char to int
        int castAscii = (int) ch;

        System.out.println("The ASCII value of " + ch + " is: " + ascii);
        System.out.println("The ASCII value of " + ch + " is: " + castAscii);
    }
}

Output

The ASCII value of a is: 97
The ASCII value of a is: 97

In the above program, character a is stored in a char variable, ch. Just as double quotes (" ") are used to declare strings, we use single quotes (' ') to declare characters.

Now, to find the ASCII value of ch, we just assign ch to an int variable, ascii. Internally, Java converts the character value to an ASCII value.

We can also cast the character ch to an integer using (int). In simple terms, casting is converting a variable from one type to another; here the char variable ch is converted to an int variable, castAscii.

Finally, we print the ASCII values using the println() function.
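As a quick cross-check of the values printed above, the same lookup in another language gives the same result. In Python, the built-in ord() returns the code point directly and chr() is its inverse; this is just an aside for comparison, not part of the Java tutorial:

```python
# ord() returns the code point of a one-character string; chr() is the inverse
ch = "a"
print("The ASCII value of " + ch + " is: " + str(ord(ch)))  # 97
print(chr(97))  # a
```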
https://www.programiz.com/java-programming/examples/ascii-value-character
[SOLVED] Use class without explicitly instantiating it I have a class that creates various widgets. This class' methods should be callable from everywhere (where the header is included, obviously), but without explicitly instantiating an object of the class. Just like when you include the iostream library and then simply type std::cin or std::cout, I would like to include my header and use my class with a syntax like myClass::doThis(). Can you tell me any ways that I could achieve this? I have read about the Singleton pattern but it doesn't really suit my needs. Also, using static data and members looked like a solution, but I'm not sure where to start and I couldn't find a guide that explains it right (actually, none that explains it, they only say "use static bla-bla-blah" without much or any code). - JKSH Moderators last edited by I have already read lots of articles about ways of using static members of a class, including the one you linked, JKSH. However it didn't really help, I actually couldn't understand how to use static members for my purpose. Also, searching on the web reveals that most developers prefer instead a "singleton pattern". Now, I'm not sure if what I've come up with is some singleton implementation, but surely accomplishes 90% of my needs. My idea is to use a global object. Since a widget cannot be created before a QApplication has been created, a proper init() method will serve the purpose of doing the initialization. The object is created in a global namespace (or in my own namespace) and used with extern in all sources that include my object's class. So here's the boilerplate code I've come up with. It runs just fine and serves my purpose. However, I am looking for feedback as I might be doing something horribly wrong. 
FILE: test.h

#ifndef TEST_H
#define TEST_H

class QPlainTextEdit;
class QString;

class test {
public:
    test();
    ~test();
    void init();
    void print(const QString &message);
private:
    QPlainTextEdit *pEdit;
};

#endif // TEST_H

FILE: test.cpp

#include "test.h"
#include <QPlainTextEdit>
#include <QString>

test::test() : pEdit(nullptr) {
    // initialize pEdit to nullptr so init() can safely test it
}

test::~test() {
    delete pEdit;
    pEdit = nullptr;
}

void test::print(const QString &message) {
    pEdit->appendPlainText(message);
}

void test::init() {
    if (pEdit == nullptr)  // if pEdit was not created, create it
        pEdit = new QPlainTextEdit;
    pEdit->show();         // and show the widget
}

/*** this is the global object that is meant to be used anywhere ***/
test testWidget;

FILE: main.cpp

#include "test.h"
#include <QApplication>

// this is the global object created in test.cpp
extern test testWidget;

int main(int argc, char *argv[]) {
    QApplication app(argc, argv);
    testWidget.init();  // initialize the object
    testWidget.print("hello world!");
    return app.exec();
}

As I was posting I noticed that the public ctor will allow other objects of this class to be instantiated, but I'm not quite sure how to instantiate that object if it has a private ctor (the compiler says it is private - and it's right). I need some advice on this as well.

- Chris Kawa Moderators last edited by Chris Kawa

First - if at all, you should put the extern declaration in the class header. This way you won't have to type it everywhere, just include the header. Second - you should not have the extern or a publicly accessible object at all. The singleton pattern goes something like this:

//header
class MySingleton {
public:
    static MySingleton* instance();
private:
    std::unique_ptr<Stuff> data;
};

//cpp
MySingleton* MySingleton::instance() {
    static MySingleton singletonObject; //it's ok to call private constructor here
    if(!singletonObject.data)
        singletonObject.data = ...
        //initialize it somehow
    return &singletonObject;
}

//usage
int main(int argc, char *argv[]) {
    QApplication app(argc, argv);
    MySingleton::instance() -> .... //use it, just not before app is created
    return app.exec();
}

- JKSH Moderators last edited by JKSH

> I have a class that creates various widgets.

When I first read this, I thought you were after the factory pattern:

// myfactory.h
class MyFactory {
public:
    static QWidget* createWidget(int type) {
        switch(type) {
        case 0: return new QMainWindow;
        case 1: return new QDialog;
        default: return new QWidget;
        }
    }
};

// main.cpp
int main(int argc, char *argv[]){
    QApplication app(argc, argv);
    QWidget* w1 = MyFactory::createWidget(0);
    QWidget* w0 = MyFactory::createWidget(1);
    w0->show();
    w1->show();
    return app.exec();
}

> searching on the web reveals that most developers prefer instead a "singleton pattern". ... As I was posting I noticed that the public ctor will allow other objects of this class to be instantiated, but I'm not quite sure how to instantiate that object if it has a private ctor

The pattern you should choose depends on what you're trying to achieve. Notice that a singleton requires lots of boilerplate code. For your class, is it worth the effort? If you only want to print messages to a central widget, then I think it's overkill to create a singleton.
You can achieve the same notation by using functions in a namespace, instead of functions in a class:

// printer.h
#include <QString>

namespace Printer {
    void print(const QString& message);
}

// printer.cpp
#include "printer.h"
#include <QPlainTextEdit>

static QPlainTextEdit* pEdit = nullptr; // NOTE: This "static" means "private"

void Printer::print(const QString& message) {
    if (!pEdit) {
        pEdit = new QPlainTextEdit;
        pEdit->show();
        // NOTE: When the QApplication is destroyed, it automatically destroys all widgets too
    }
    pEdit->appendPlainText(message);
}

// main.cpp
#include <QApplication>
#include "printer.h"

int main(int argc, char *argv[]) {
    QApplication app(argc, argv);
    Printer::print("Hello world!");
    return app.exec();
}

That's much fewer lines of code. The "static" might be confusing -- in my example, it means that only printer.cpp is allowed to access pEdit. It is completely different from "static" applied to class members. (That's one of the flaws of C++: too many different meanings for "static".)

> It runs just fine and serves my purpose. However, I am looking for feedback as I might be doing something horribly wrong.

I don't see anything wrong with it, aside from the unnecessary "extern" that @Chris-Kawa pointed out, and the public constructor that you already pointed out. Those aren't fatal flaws though.

JKSH, you just made my day! I was coming up with a similar solution to yours, but I didn't know how to make a global object accessible through some functions only. So static, when used within a file (or namespace), will actually make that <static> object a local private object. I could never guess it since I always thought of static applied to class/function members.

No, I wasn't looking for the factory pattern, even though I might need that for some other purpose ;) I think I misled you by using the word creates instead of... contains (?). Unfortunately, my English is not good enough at moments.
The singleton pattern, however, was clearly overkill when it comes to code (for example, a singleton class is hardly extensible through subclassing). I also read it is anything but multithread safe (even though I couldn't understand why - I'll make sure I read some more about it). Thanks Chris for your reply as well!

As a final question, are both JKSH's solution and mine multithread safe? I mean, I don't care if in a multithread scenario the chronological order is not respected when printing (I read it is a common problem). I just care about having the messages printed without a print from one thread breaking the print of another and losing any messages.

- Chris Kawa Moderators last edited by Chris Kawa

There are two aspects of thread safety here.

First - creation of the singleton/global object. From the given solutions, only the one I presented is thread safe, and only starting from C++11. This is because in C++11 function-local static variables are guaranteed to be constructed in a thread-safe manner and initialized only once (a.k.a. magic statics). Unfortunately compiler support is still spotty (e.g. Visual Studio supports it starting with 2015).

Second - accessing the singleton/global functions. None of the suggested approaches are thread safe. What's more, even guarding it with a mutex is not enough, as you can't access/create widgets from worker threads.

- JKSH Moderators last edited by JKSH

> So static when used within a file (or namespace) will actually make that <static> object a local private object. I could never guess it since I always thought of static applied to class/function members.

Not namespaces. Files only. Without "static", other files can access the widget by writing extern QPlainTextEdit* pEdit. With "static", only printer.cpp can access it. Namespaces are not relevant. This usage of "static" comes from the C language, which doesn't have classes.

> I think I misled you by using the word creates instead of... contains (?)
> Unfortunately, my English is not good enough at moments.

Not a problem :) I can understand you quite well most of the time.

> I also read it is anything but multithread safe (even though I couldn't understand why - I'll make sure I read some more about it).

A singleton class can be made thread-safe the same way you make any other class thread-safe -- using mutexes, for example.

> As a final question, are both JKSH's solution and mine multithread safe?

Currently, they are not thread-safe, because you can only construct QPlainTextEdit and call its functions in the GUI thread. (If you violate these rules, your program will crash.)

To make them thread-safe, you need to do 2 things:

- Make sure you call an init() function from the GUI thread, to construct the QPlainTextEdit.
- Replace the function calls with queued invocations:

// Replace this...
pEdit->appendPlainText(message);

// ...with this:
QMetaObject::invokeMethod(pEdit, "appendPlainText",
                          Qt::QueuedConnection,
                          Q_ARG(QString, message));

This makes the function run in the thread that pEdit lives in. However, invokeMethod() only works if the function is a slot, or if it is marked with Q_INVOKABLE.

Multithreading is something I'll be looking deep into later. For now my problem is solved! Thank you both very much, guys!
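The queued-invocation idea discussed above is not Qt-specific: a single "owner" thread consumes work from a queue, while every other thread only enqueues. A minimal, framework-free sketch of the same pattern in Python (illustrative only — this is not Qt code, and the names are made up):

```python
import queue
import threading

# The "GUI thread" analog: the only thread allowed to touch the widget state.
messages = queue.Queue()
log = []  # stands in for the QPlainTextEdit contents

def owner_thread():
    # Drain queued calls until a sentinel arrives, like queued slot invocations
    while True:
        msg = messages.get()
        if msg is None:  # sentinel: shut down
            break
        log.append(msg)  # only this thread mutates the "widget"

def print_from_anywhere(msg):
    # Safe to call from any thread: it only enqueues
    messages.put(msg)

owner = threading.Thread(target=owner_thread)
owner.start()

workers = [threading.Thread(target=print_from_anywhere, args=(f"msg {i}",))
           for i in range(3)]
for w in workers: w.start()
for w in workers: w.join()   # all messages are now enqueued (FIFO)

messages.put(None)           # sentinel queued after the real messages
owner.join()
print(sorted(log))  # ['msg 0', 'msg 1', 'msg 2']
```

Qt's Qt::QueuedConnection does essentially this for you via the receiver thread's event loop, which is why it is safe where a direct cross-thread widget call is not.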
https://forum.qt.io/topic/51859/solved-use-class-without-explicitly-instantiating-it
Dynamic form

Hi, I have a form with a FormFieldFactory and two Selects. One is productFamilyId and the other is productId. The productId item list depends on the productFamilyId value. At field creation time there is no problem, because the form item, i.e. all the properties, is available:

public Field createField(Item item, Object propertyId, Component uiContext) {

What I don't know is how to get the other field and force it to refill its options. I have attached a ValueChangeListener to the productFamilyId and I know the new value when it changes:

f6.addListener(new ValueChangeListener() {
    @Override
    public void valueChange(ValueChangeEvent event) {
        Integer newProductFamilyId = (Integer) event.getProperty().getValue();
        event.getProperty().getType();
    }
});

I added a ValueChangeListener to the form but it doesn't receive any event. Is there any way to receive an event with the current item when a property changes? I've tried to add a Property.ValueChangeListener to the form, but the IDE says that interface is not implemented. How to get access to the productId form field? Is there any other way to implement cross-field events? Any ideas? Thanks

Well, I've found a way to do it: create a class that implements ValueChangeListener with access to all the form fields. So, when a Select field changes, I can fill another select. I have implemented an interface to receive all the ValueChangeEvents of the form:

@Override
public void crossFieldChange(Form fc, ValueChangeEvent event) {
    Property p = event.getProperty();
    try {
        Field f = (Field) p;
        if (f.getCaption().equals(f6.getCaption())) {
            AbstractSelect as = (Select) f7.getField();
            as.removeAllItems();
            List<E1> lc = service.findOptions((Integer) p.getValue());
            for (E1 c: lc) {
                as.addItem(c.getUserId());
                as.setItemCaption(c.getUserId(), "" + c.getUserId() + " " + c.getUserName());
            }
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}

Now when the f6 select value changes, the f7 select list is updated. Both fields are required.
When f6 changes, a "!" appears near f7 because f7 becomes blank: the current f7 value does not match any option of the new list. I select a new option in f7 and the "!" disappears, but then commit fails. The exception is Validator$EmptyValueException. I've checked: the form doesn't like removing and adding items. Every form field is immediate, the form is also immediate, not invalidCommitted and not writeThrough. Any ideas? Thanks

Well, the cause is explained here. What is still not clear is whether the valueChange() call during form.commit() is a bug or a feature. Regards

Here's one way to do it, if I understood your intention correctly:

public class MoonBean implements java.io.Serializable {
    String planetName;
    String moonName;

    MoonBean() {
        planetName = "";
        moonName = "";
    }

    public void setPlanet(String planet) { this.planetName = planet; }
    public String getPlanet() { return planetName; }
    public void setMoon(String moon) { this.moonName = moon; }
    public String getMoon() { return moonName; }
}

void selectExample() {
    VerticalLayout layout = new VerticalLayout();

    final String[][] planets = new String[][] {
        {"Mercury"},
        {"Venus"},
        {"Earth", "The Moon"},
        {"Mars", "Phobos", "Deimos"},
        {"Jupiter", "Io", "Europa", "Ganymedes", "Callisto"},
        {"Saturn", "Titan", "Tethys", "Dione", "Rhea", "Iapetus"},
        {"Uranus", "Miranda", "Ariel", "Umbriel", "Titania", "Oberon"},
        {"Neptune", "Triton", "Proteus", "Nereid", "Larissa"}};

    class MyFieldFactory implements FormFieldFactory {
        public Field createField(Item item, Object propertyId, Component uiContext) {
            final Form form = (Form) uiContext;
            String pid = (String) propertyId;

            if (pid.equals("planet")) {
                final ComboBox planet = new ComboBox("Planet");
                planet.setNullSelectionAllowed(false);
                planet.setInputPrompt("-- Select a Planet --");
                for (int pl=0; pl<planets.length; pl++)
                    planet.addItem(planets[pl][0]);

                planet.addListener(new Property.ValueChangeListener() {
                    public void valueChange(ValueChangeEvent event) {
                        String selected = (String) planet.getValue();
                        if (selected == null)
                            return;
                        ComboBox moon = (ComboBox) form.getField("moon");
                        for (int pl=0; pl<planets.length; pl++)
                            if (selected.equals(planets[pl][0])) {
                                moon.removeAllItems();
                                moon.setInputPrompt("-- Select a Moon --");
                                moon.setNullSelectionAllowed(false);
                                for (int mn=1; mn<planets[pl].length; mn++)
                                    moon.addItem(planets[pl][mn]);
                                moon.setEnabled(planets[pl].length > 1);
                            }
                    }
                });
                planet.setImmediate(true);
                return planet;
            }

            if (pid.equals("moon")) {
                ComboBox moonSel = new ComboBox("Moon");
                moonSel.setEnabled(false); // Select a planet first
                return moonSel;
            }
            return null;
        }
    }

    // The form
    Form myform = new Form();
    myform.setCaption("My Little Form");
    myform.setFormFieldFactory(new MyFieldFactory());

    // Create a bean to use as a data source for the form
    MoonBean moonbean = new MoonBean();
    myform.setItemDataSource(new BeanItem<MoonBean>(moonbean));
    myform.setVisibleItemProperties(new Object[] {"planet", "moon"});

    layout.addComponent(myform);
    setCompositionRoot(layout);
}

See the online example.

Marko, I'm also interested in dynamic forms and I'm trying to understand this topic with the help of your example. Selection and dynamic moon refilling work fine, but I have a problem with form.commit(): when I use commit(), the moon ComboBox is emptied.

Details: I added these lines of code to your example:

Button apply = new Button("commit()", new Button.ClickListener() {
    public void buttonClick(ClickEvent event) {
        try { myform.commit(); }
        catch (Exception e) { System.out.println(e.toString()); }
    }
});
addComponent(apply);

After starting the application I first choose "Jupiter" and then "Callisto" as the moon value. Fine! Then I press the commit button. What happens is that the first ComboBox still displays "Jupiter", but the second ComboBox displays "-- Select a Moon --" again. Is there a good way to make sure that the dynamically filled ComboBox still shows the selected value after commit()?

Thanks, Thorsten

Hi Vaadin team, my congratulations on the 6.3 release!
It shows great promise for the future. I hoped that this special dynamic form effect would suddenly disappear with the new release, but sadly it did not. So I would like to repeat my question... Thanks a lot, Thorsten

Hi, I just want to bring up this topic because the effect I mentioned still exists in 6.4. It is difficult to create dynamic forms because of this. I hope there will be a solution/workaround in future. Thanks, Thorsten

> Thorsten A: Hi, I just want to bring up this topic because the effect I mentioned still exists in 6.4. It is difficult to create dynamic forms because of this. I hope there will be a solution/workaround in future. Thanks, Thorsten

Isn't it a matter of storing the "moon" combobox value before clearing it and resetting it afterwards? Something like:

   if (selected.equals(planets[pl][0])) {
->     Object moonValue = moon.getValue();
       moon.removeAllItems();
       moon.setInputPrompt("-- Select a Moon --");
       moon.setNullSelectionAllowed(false);
       for (int mn = 1; mn < planets[pl].length; mn++) {
           moon.addItem(planets[pl][mn]);
       }
       moon.setEnabled(planets[pl].length > 1);
->     moon.setValue(moonValue);
   }

I'm also having the same problem under 6.7.0. Under form.commit, the dependent select goes empty, and the commit doesn't succeed. Also, it should be pointed out that when I want to edit the same bean again, it will execute:

form.setItemDataSource(bi)

again, but this second time an exception is thrown:

Cause: java.lang.NullPointerException
    at com.vaadin.event.ListenerMethod.receiveEvent(ListenerMethod.java:532)
    at com.vaadin.event.EventRouter.fireEvent(EventRouter.java:164)
    at com.vaadin.ui.AbstractComponent.fireEvent(AbstractComponent.java:1219)
    at com.vaadin.ui.AbstractField.fireValueChange(AbstractField.java:906)
    at com.vaadin.ui.AbstractField.setPropertyDataSource(AbstractField.java:645)
    at com.vaadin.ui.Form.setItemDataSource(Form.java:770)
    at com.vaadin.ui.Form.setItemDataSource(Form.java:718)

I'm running out of ideas or alternatives.. any help appreciated.
https://vaadin.com/forum/thread/127659/dynamic-form
assign modules to run on specific nodes. It requires a Kubernetes cluster with Helm initialized and kubectl installed.

# node-selector-example edge-kubernetes \
  --namespace nodeselector \
  --set "provisioning.deviceConnectionString=$connStr"

List the nodes in your cluster:

kubectl get nodes

Pick one of the nodes and add a label to it like so:

kubectl label nodes <node-name> edgehub=true

schedule on a node with a specific label.

{
  "": {
    "HostConfig": {
      "PortBindings": {
        "5671/tcp": [{ "HostPort": "5671" }],
        "8883/tcp": [{ "HostPort": "8883" }],
        "443/tcp": [{ "HostPort": "443" }]
      }
    },
+   "k8s-experimental": {
+     "nodeSelector": {
+       "edgehub": "true"
+     }
+   }
  }
} } } },
"modules": {}
} },
"$edgeHub": {
  "properties.desired": {
    "schemaVersion": "1.0",
    "routes": {
      "upstream": "FROM /messages/* INTO $upstream"
    },
    "storeAndForwardConfiguration": {
      "timeToLiveSecs": 7200
    }
  }
} } } }

on the node you added the edgehub=true label to. You can confirm this by checking the NODE column for the edgehub pod in the output from below:

kubectl get pods -n nodeselector -o wide

Cleanup

# Cleanup
helm del node-selector-example -n nodeselector && \
kubectl delete ns nodeselector

...will remove all the Kubernetes resources deployed as part of the edge deployment in this example (the IoT Edge CRD will not be deleted).
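The change shown in the diff-marked lines above just adds a k8s-experimental.nodeSelector object alongside the module's HostConfig. A small Python sketch of that transformation — the helper name is hypothetical; only the k8s-experimental/nodeSelector keys and the edgehub=true label come from the example above:

```python
import json

def add_node_selector(create_options, label, value):
    """Return create options extended with a k8s-experimental nodeSelector."""
    out = dict(create_options)  # shallow copy; we only add a new top-level key
    out.setdefault("k8s-experimental", {}).setdefault("nodeSelector", {})[label] = value
    return out

opts = {
    "HostConfig": {
        "PortBindings": {
            "5671/tcp": [{"HostPort": "5671"}],
            "8883/tcp": [{"HostPort": "8883"}],
            "443/tcp": [{"HostPort": "443"}],
        }
    }
}

patched = add_node_selector(opts, "edgehub", "true")
print(json.dumps(patched["k8s-experimental"]))
```

The existing HostConfig section is untouched; the selector simply constrains scheduling to nodes carrying the matching label.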
https://microsoft.github.io/iotedge-k8s-doc/examples/nodeselector.html
Can anyone explain how to save an image locally using Python when we already know the URL address?

Hi, @Roshni, the following code saves the image locally from the URL address which you know:

```python
import urllib.request

urllib.request.urlretrieve("URL", "local-filename.jpg")
```
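For a slightly more defensive variant of the same idea (still standard library only; the function name and the 64 KB chunk size are just illustrative choices), you can stream the download and fail on unexpected HTTP statuses:

```python
import shutil
import urllib.request

def save_image(url, filename, chunk_size=64 * 1024):
    """Download `url` and stream the bytes to `filename` in chunks,
    so large images are not held in memory all at once."""
    with urllib.request.urlopen(url) as response:
        if response.status != 200:
            raise RuntimeError(f"unexpected HTTP status {response.status}")
        with open(filename, "wb") as out:
            # copy the response body to disk chunk by chunk
            shutil.copyfileobj(response, out, chunk_size)
```

As with urlretrieve, this performs no validation that the body is actually an image; check the Content-Type header if that matters for your use case.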
https://www.edureka.co/community/73430/save-image-locally-using-python-which-already-know-address?show=73431
```css
#card { font-family: Arial; font-size: 1rem; padding: 20px; border: 1px solid #ddd; }
#buttons { padding: 10px 0px; }
button { margin-right: 10px; }
#loader { background-color: #ddf; padding: 5px; border-radius: 5px; color: #008; }
#title { background-color: #333; color: #fff; padding: 5px; }
#empty { background-color: #ffd; padding: 10px; color: #880; }
#error { background-color: #fdd; padding: 10px; color: #800; }
```

```jsx
const Card = ({ children }) => <div id="card">{children}</div>
const Title = ({ children }) => <div id="title"><h2>{children}</h2></div>
const Loader = ({ children }) => <div id="loader">{children}</div>
const Error = ({ children }) => <div id="error">{children}</div>
const Empty = ({ children }) => <div id="empty">{children}</div>

const App = () => {
  // initial state for loading
  const [dataStatus, setDataStatus] = React.useState("Loading")
  const [person, setPerson] = React.useState()

  // fake out api calls
  // setDataStatus("Loading") when waiting for the API
  const setLoading = () => {
    setDataStatus("Loading")
    setPerson()
  }
  // setDataStatus("Loaded") when successful with data
  const setLoaded = () => {
    setDataStatus("Loaded")
    setPerson({ name: "Luke Skywalker" })
  }
  // setDataStatus("Empty") when successful with no data
  const setEmpty = () => {
    setDataStatus("Empty")
    setPerson()
  }
  // setDataStatus("Error") if an error occurs
  const setError = () => {
    setDataStatus("Error")
    setPerson()
  }

  // now manage the state of the component
  return (
    <Card>
      { dataStatus === "Loaded" && <Title>{person.name}</Title> }
      { dataStatus === "Loading" && <Loader>Loading...</Loader> }
      { dataStatus === "Error" && <Error>Sorry, we had an oopsie...</Error> }
      { dataStatus === "Empty" && <Empty>Looks like we're missing something...</Empty> }
      <div id="buttons">
        <button onClick={setLoaded}>Set to Loaded</button>
        <button onClick={setEmpty}>Set to Empty</button>
        <button onClick={setError}>Set to Error</button>
        <button onClick={setLoading}>Set to Loading</button>
        Current State: {dataStatus}
      </div>
    </Card>
  )
}

ReactDOM.render(<App />, document.getElementById('root'))
```
https://codepen.io/davidlozzi/pen/WNpwMjX
a "Hello, world" scenario of deploying a simulated temperature sensor edge module. It requires a Kubernetes cluster with Helm initialized and kubectlinstalled as noted in the prerequisites. Setup steps Register an IoT Edge device and deploy the simulated temperature sensor module. Be sure to note the device's connection string. Create a Kubernetes namespace to install the edge deployment into. kubectl create ns helloworld Install IoT Edge Custom Resource Definition (CRD). helm install --repo edge-crd edge-kubernetes-crd Deploy the edge workload into the previously created K8s namespace. For simplicity, this tutorial doesn't specify a persistent store for iotedgedduring install. However, for any serious/PoC deployment, follow the best practice example shown in the iotedged failure resilience tutorial. # Store the device connection string in a variable (enclose in single quotes) export connStr='replace-with-device-connection-string-from-step-1' # Install edge deployment into the created namespace helm install --repo edge1 edge-kubernetes \ --namespace helloworld \ --set "provisioning.deviceConnectionString=$connStr" In a couple of minutes, you should see the workload modules defined in the edge deploymentment running as pods along with edgeagentand iotedged. Confirm this using: kubectl get pods -n helloworld # View the logs from the simlulated temperature sensor module kubectl logs -n helloworld replace-with-temp-sensor-pod-name simulatedtemperaturesensor Cleanup # Cleanup helm del edge1 -n helloworld && \ kubectl delete ns helloworld ...will remove all the Kubernetes resources deployed as part of the edge deployment in this example (IoT Edge CRD will not be deleted).
https://microsoft.github.io/iotedge-k8s-doc/examples/helloworld.html
Flags all occurrences of reinterpret_cast.

#include <ProTypeReinterpretCastCheck.h>

Flags all occurrences of reinterpret_cast. For the user-facing documentation see:

Definition at line 22 of file ProTypeReinterpretCastCheck.h.

Definition at line 24 of file ProTypeReinterpretCastCheck.h.

ClangTidyChecks that register ASTMatchers should do the actual work in here. Reimplemented from clang::tidy::ClangTidyCheck.

Definition at line 23 of file ProTypeReinterpretCastCheck.cpp.

Definition at line 26 of file ProTypeReinterpretCastCheck.cpp.

Definition at line 19 of file ProTypeReinterpretCastCheck.cpp.
https://clang.llvm.org/extra/doxygen/classclang_1_1tidy_1_1cppcoreguidelines_1_1ProTypeReinterpretCastCheck.html
I have some scenarios building train models and generating an RMSE metric, and I would like to launch a second scenario to build the predictions if the RMSE generated by the last scenario is good. I don't find anything corresponding to my problem in the helpers; the only part that comes close is the png part. I thought it would be a trigger like this:

```python
from dataiku.scenario import Trigger

t = Trigger()
if True:  # your condition here, e.g. ${RMSE} > 4
    t.fire()
```

with ${RMSE} being the project variable that would change based on the last RMSE obtained from the train's output table. Is this kind of thing possible in Dataiku in a simple way? Thanks.

Hi, if we assume that we have a project-level variable called rmse, then we could create a trigger like this to fire off a scenario when that variable is above a number (in this case 4):

```python
import dataiku
from dataiku.scenario import Trigger

c_vars = dataiku.get_custom_variables()

t = Trigger()
if float(c_vars['rmse']) > 4:
    t.fire()
```

Thanks for your response. What I'm looking for is for my project variable RMSE to be equal to the last metric calculated by the first scenario.
https://community.dataiku.com/t5/Using-Dataiku/Scenario-custom-trigger-with-a-projet-variable/m-p/6409
Introduction

There is a large and ever-growing number of use cases for graph databases, and many of them are centered around one important functionality: relationship traversals. While in traditional relational databases the concept of foreign keys seems like a simple and efficient idea, the truth is that they result in very complex joins and self-joins when the dataset becomes too inter-related. Graph databases offer powerful data modeling and analysis capabilities for many real-world problems such as social networks, business relationships, dependencies, shipping, logistics… and they have been adopted by many of the world's leading tech companies. The use case you'll be working on is fraud detection in large transaction networks. Usually, such networks contain millions of relationships between POS devices, logged transactions, and credit cards, which makes them a perfect target for graph database algorithms. In this tutorial, you will learn how to build a simple Python web application from scratch. You will get a basic understanding of the technologies that are used, and see how easy it is to integrate a graph database in your development process. Since you will be building a complete web application, there are a number of tools that you will need to install before getting started:

- Memgraph DB: a native fully distributed in-memory graph database built to handle real-time use-cases at enterprise scale. Follow the Docker installation instructions.

While it's completely optional, I encourage you to also install Memgraph Lab so you can execute Cypher queries on the database directly and see visualized results.

Understanding the Payment Fraud Detection Scenario

First, let's define all the roles in this scenario:

- Card - a credit card used for payment.
- POS - a point of sale device that uses a card to execute transactions.
- Transaction - a stored instance of buying something.
Your application will simulate how a POS device gets compromised, then a card in contact with that POS device gets compromised as well and in the end, a fraudulent transaction is reported. Based on these reported transactions, Memgraph is used to search for the root-cause (a.k.a. the compromised POS) of the reported fraudulent transactions and all the cards that have fallen victim to it as shown below. Because this is a demo application you will create a set number of random cards, POS devices, and transactions. Some of these POS devices will be marked as compromised. If you find a compromised POS device, while searching for frauds in the network, then you'll mark the card as compromised as well. If the card is compromised, there is a 0.1% chance the transaction is fraudulent and detected (regardless of the POS device). You can then visualize all the transactions and cards connected to that POS device and resolve them as not fraudulent if need be. Defining the Graph Schema After we defined the scenario, it's time to create the graph schema! A graph schema is a "dictionary" that defines the types of entities, vertices, and edges, in the graph and how those types of entities are related to one another. Ok, so you know that there are three main entities in your model: Card, POS and Transaction. The next step is to determine how these entities are represented in the graph and how they are connected. If you are not familiar with graph databases, a good rule of thumb is to use a relational analogy to get you started. All of these entities would be separate tables in a relational database and therefore they could be separate types of nodes in a graph. And so it is! Each type of node has a different label: Card, Pos and Transaction. All of them have the property id so you can identify them. The nodes Card and Pos also have the boolean property compromised to indicate if fraudulent activity has taken place. 
The node Transaction has a similar boolean property with the name fraudReported. But how are these nodes connected? The nodes labeled Card and Transaction are connected via a relationship of type :USING. Internalize the meaning by reading it out loud: a transaction is executed USING a card. In the same fashion, a transaction is executed AT a POS device, so the relationship between Transaction and POS is of type :AT. Using the Cypher query language notation, the data structure when there are no frauds looks like this:

```
(:Card {compromised: false})<-[:USING]-(:Transaction)-[:AT]->(:Pos {compromised: false})
```

The data structure when frauds occur:

```
(:Card {compromised: true})<-[:USING]-(:Transaction)-[:AT]->(:Pos {compromised: true})
(:Card {compromised: true})<-[:USING]-(:Transaction {fraudReported: true})-[:AT]->(:Pos)
```

Building the Web Application Backbone

This is presumably the easy part. You need to create a simple Python web application using Flask to be your server. Let's start by creating a root directory for your project and naming it card_fraud. There you need to create a requirements.txt file containing the necessary PIP installs. For now, only one line is needed:

```
Flask==1.1.2
```

You can install the specified package by running:

```
pip3 install -r requirements.txt
```

Add a new file to your root directory with the name card_fraud.py and the following code:

```python
from flask import Flask

app = Flask(__name__)

@app.route('/')
@app.route('/index')
def index():
    return "Hello World"
```

You are probably rolling your eyes while reading this, but don't mock the Hello World example! Let's compile and run your server to see if everything works as expected. Open a terminal, position yourself in the root directory and execute the following two commands:

```
export FLASK_APP=card_fraud.py
export FLASK_ENV=development
```

This way, you have defined the entry point of your app and set the environment to development. This will enable development features like automatic code reloading.
Don't forget to change this to production when you're ready to deploy your app. To run the server, execute:

```
flask run --host 0.0.0.0
```

You should see a message similar to the following, indicating that your server is up and running:

```
* Serving Flask app "card_fraud"
* Running on
```

Dockerizing the Application

In the root directory of the project create two files, Dockerfile and docker-compose.yml. At the beginning of the Dockerfile, you specify the parent image and instruct the container to install CMake, mgclient, and pymgclient. CMake and mgclient are necessary to install pymgclient, the Python driver for Memgraph DB. You don't have to focus too much on this part, just copy the code to your Dockerfile:

```dockerfile
# pymgclient
RUN git clone /pymgclient && \
    cd pymgclient && \
    python3 setup.py build && \
    python3 setup.py install

# Install packages
COPY requirements.txt ./
RUN pip3 install -r requirements.txt

COPY card_fraud.py /app/card_fraud.py

WORKDIR /app

ENV FLASK_ENV=development
ENV LC_ALL=C.UTF-8
ENV LANG=C.UTF-8

ENTRYPOINT ["python3", "card_fraud.py"]
```

If you are not familiar with Docker, do yourself a favor and take a look at this: Getting started with Docker. Next comes docker-compose.yml, which defines the card_fraud service: specifically, your service card_fraud needs the database to start before the web application. The build key allows us to tell Compose where to find the build instructions as well as the files and/or folders used during the build process. By using the volumes key, you bypass the need to constantly restart your image to load new changes to it from the host machine. Congratulations, you now have a dockerized app! This approach is great for development because it enables you to run your project on completely different operating systems and environments without having to worry about compatibility issues.
To make sure we are on the same page, your project structure should look like this:

```
card_fraud
├── card_fraud.py
├── docker-compose.yml
├── Dockerfile
└── requirements.txt
```

Let's start your app to make sure you don't have any errors. In the project root directory execute:

```
docker-compose build
```

The first build will take some time because Docker has to download and install a lot of dependencies. After it finishes run:

```
docker-compose up
```

The URL of your web application is 0.0.0.0:5000. You should see the message Hello World, which means that the app is up and running correctly.

Defining the Business Logic

At this point, you have a basic web server and a database instance. It's time to add some useful functionalities to your app. To communicate with the database, your app needs some kind of OGM - Object Graph Mapping system. You can just reuse this one: custom OGM. Add the database directory with all of its contents to the root directory of your project. Also, delete the contents of card_fraud.py because you are starting from scratch. Let's fetch the environment variables you defined in the docker-compose.yml file by adding the following code in card_fraud.py:

```python
import os

MG_HOST = os.getenv('MG_HOST', '127.0.0.1')
MG_PORT = int(os.getenv('MG_PORT', '7687'))
MG_USERNAME = os.getenv('MG_USERNAME', '')
MG_PASSWORD = os.getenv('MG_PASSWORD', '')
MG_ENCRYPTED = os.getenv('MG_ENCRYPT', 'false').lower() == 'true'
```

No web application is complete without logging, so let's at least add the bare minimum:

```python
import logging
import time

log = logging.getLogger(__name__)

def init_log():
    logging.basicConfig(level=logging.INFO)
    log.info("Logging enabled")
    logging.getLogger("werkzeug").setLevel(logging.WARNING)

init_log()
```

It would also be convenient to add an input argument parser so you can run the app with different configurations without hardcoding them. Add the following import and function:

```python
from argparse import ArgumentParser

def parse_args():
    '''Parse command-line arguments.'''
    parser = ArgumentParser(description=__doc__)
    parser.add_argument("--app-host", default="0.0.0.0",
                        help="Allowed host addresses.")
    parser.add_argument("--app-port", default=5000, type=int,
                        help="App port.")
    parser.add_argument("--template-folder", default="public/template",
                        help="The folder with flask templates.")
    parser.add_argument("--static-folder", default="public",
                        help="The folder with flask static files.")
    parser.add_argument("--debug", default=True, action="store_true",
                        help="Run web server in debug mode")
    parser.add_argument('--clean-on-start', action='store_true',
                        help='Should the DB be emptied on script start')
    print(__doc__)
    return parser.parse_args()

args = parse_args()
```

Now, you can connect to your database and create an instance of a Flask server by adding the following code:

```python
from flask import Flask, Response, request, render_template
from database import Memgraph

db = Memgraph(host=MG_HOST, port=MG_PORT, username=MG_USERNAME,
              password=MG_PASSWORD, encrypted=MG_ENCRYPTED)
app = Flask(__name__,
            template_folder=args.template_folder,
            static_folder=args.static_folder,
            static_url_path='')
```

Finally, you come to the business logic and all the interesting functions. Get ready because there are many things you need to implement. If you'd rather just copy them and read their descriptions later, that's fine too. You can find the complete card_fraud.py script here and continue the tutorial from this section.

Clearing the Database

You need to start with an empty database, so let's implement a function to drop all the existing data from it:

```python
def clear_db():
    """Clear the database."""
    db.execute_query("MATCH (n) DETACH DELETE n")
    log.info("Database cleared")
```

Adding Initial Cards and POS Devices

There is a fixed number of initial cards and POS devices that need to be added to the database at the beginning.
```python
def init_data(card_count, pos_count):
    """Populate the database with initial Card and POS device entries."""
    log.info("Initializing {} cards and {} POS devices".format(
        card_count, pos_count))
    start_time = time.time()
    db.execute_query("UNWIND range(0, {} - 1) AS id "
                     "CREATE (:Card {{id: id, compromised: false}})".format(
                         card_count))
    db.execute_query("UNWIND range(0, {} - 1) AS id "
                     "CREATE (:Pos {{id: id, compromised: false}})".format(
                         pos_count))
    log.info("Initialized data in %.2f sec", time.time() - start_time)
```

Adding a Single Compromised POS Device

You need the option of changing the property compromised of a POS device to true, given that all of them are initialized as false at the beginning.

```python
def compromise_pos(pos_id):
    """Mark a POS device as compromised."""
    db.execute_query(
        "MATCH (p:Pos {{id: {}}}) SET p.compromised = true".format(pos_id))
    log.info("Point of sale %d is compromised", pos_id)
```

Adding Multiple Random Compromised POS Devices

You can also compromise a set number of randomly selected POS devices at once.

```python
from random import sample

def compromise_pos_devices(pos_count, fraud_count):
    """Compromise a number of random POS devices."""
    log.info("Compromising {} out of {} POS devices".format(
        fraud_count, pos_count))
    start_time = time.time()
    compromised_devices = sample(range(pos_count), fraud_count)
    for pos_id in compromised_devices:
        compromise_pos(pos_id)
    log.info("Compromisation took %.2f sec", time.time() - start_time)
```

Adding Credit Card Transactions

This is where the main analysis for fraud detection happens. If the POS device is compromised, then the card in the transaction gets compromised too. If the card is compromised, there is a 0.1% chance the transaction is fraudulent and detected (regardless of the POS device).

```python
from random import randint

def pump_transactions(card_count, pos_count, tx_count, report_pct):
    """Create transactions. If the POS device is compromised,
    then the card in the transaction gets compromised too.
    If the card is compromised, there is a 0.1% chance the
    transaction is fraudulent and detected (regardless of the POS device)."""
    log.info("Creating {} transactions".format(tx_count))
    start_time = time.time()
    query = ("MATCH (c:Card {{id: {}}}), (p:Pos {{id: {}}}) "
             "CREATE (t:Transaction "
             "{{id: {}, fraudReported: c.compromised AND (rand() < %f)}}) "
             "CREATE (c)<-[:Using]-(t)-[:At]->(p) "
             "SET c.compromised = p.compromised" % report_pct)

    def rint(max):
        return randint(0, max - 1)

    for i in range(tx_count):
        db.execute_query(query.format(rint(card_count), rint(pos_count), i))
    duration = time.time() - start_time
    log.info("Created %d transactions in %.2f seconds", tx_count, duration)
```

Resolving Transactions and Cards on a POS Device

You also need to have the functionality to resolve suspected fraud cases. This means marking all the connected components of a POS device as not compromised if they are cards, and not fraudulent if they are transactions. This function is triggered by a POST request to the URL /resolve-pos. The request body contains the variable pos, which specifies the id of the POS device.

```python
import json

@app.route('/resolve-pos', methods=['POST'])
def resolve_pos():
    """Resolve a POS device and card as not compromised."""
    data = request.get_json(silent=True)
    start_time = time.time()
    db.execute_query("MATCH (p:Pos {{id: {}}}) "
                     "SET p.compromised = false "
                     "WITH p MATCH (p)--(t:Transaction)--(c:Card) "
                     "SET t.fraudReported = false, "
                     "c.compromised = false".format(data['pos']))
    duration = time.time() - start_time
    log.info("Compromised Point of sale %s has been resolved in %.2f sec",
             data['pos'], duration)
    response = {"duration": duration}
    return Response(
        json.dumps(response), status=201, mimetype='application/json')
```

Fetching all Compromised POS Devices

This function searches the database for all POS devices that have more than one fraudulent transaction connected to them. It is triggered by a GET request to the URL /get-compromised-pos.
```python
@app.route('/get-compromised-pos', methods=['GET'])
def get_compromised_pos():
    """Get compromised POS devices."""
    log.info("Getting compromised Point Of Service IDs")
    start_time = time.time()
    data = db.execute_and_fetch(
        "MATCH (t:Transaction {fraudReported: true})-[:Using]->(:Card)"
        "<-[:Using]-(:Transaction)-[:At]->(p:Pos) "
        "WITH p.id as pos, count(t) as connected_frauds "
        "WHERE connected_frauds > 1 "
        "RETURN pos, connected_frauds ORDER BY connected_frauds DESC")
    data = list(data)
    log.info("Found %d POS with more than one fraud in %.2f sec",
             len(data), time.time() - start_time)
    return json.dumps(data)
```

Fetching all Fraudulent Transactions

With a very simple query, you can return all the transactions that are marked as fraudulent. The function is triggered by a GET request to the URL /get-fraudulent-transactions.

```python
@app.route('/get-fraudulent-transactions', methods=['GET'])
def get_fraudulent_transactions():
    """Get fraudulent transactions."""
    log.info("Getting fraudulent transactions")
    start_time = time.time()
    data = db.execute_and_fetch(
        "MATCH (t:Transaction {fraudReported: true}) RETURN t.id as id")
    data = list(data)
    duration = time.time() - start_time
    log.info("Found %d fraudulent transactions in %.2f", len(data), duration)
    response = {"duration": duration, "fraudulent_txs": data}
    return Response(
        json.dumps(response), status=201, mimetype='application/json')
```

Generating Demo Data

Your app will have an option to generate a specified number of cards, POS devices, and transactions, so you need a function that will be responsible for creating them and marking a number of them as compromised/fraudulent. It's triggered by a POST request to the URL /generate-data. The request body contains the variables:

- pos: specifies the number of POS devices.
- frauds: specifies the number of compromised POS devices.
- cards: specifies the number of cards.
- transactions: specifies the number of transactions.
- reports: specifies the number of reported transactions.
```python
@app.route('/generate-data', methods=['POST'])
def generate_data():
    """Initialize the database."""
    data = request.get_json(silent=True)
    if data['pos'] < data['frauds']:
        return Response(
            json.dumps({'error': "There can't be more frauds than devices"}),
            status=418,
            mimetype='application/json')
    start_time = time.time()
    clear_db()
    init_data(data['cards'], data['pos'])
    compromise_pos_devices(data['pos'], data['frauds'])
    pump_transactions(data['cards'], data['pos'],
                      data['transactions'], data['reports'])
    duration = time.time() - start_time
    response = {"duration": duration}
    return Response(
        json.dumps(response), status=201, mimetype='application/json')
```

Fetching POS Device Connected Components

This function finds all the connected components of a compromised POS device and returns them to the client. It's triggered by a POST request to the URL /pos-graph.

```python
@app.route('/pos-graph', methods=['POST'])
def host():
    log.info("Client fetching POS connected components")
    request_data = request.get_json(silent=True)
    data = db.execute_and_fetch(
        "MATCH (p1:Pos)<-[:At]-(t1:Transaction {{fraudReported: true}})-[:Using] "
        "->(c:Card)<-[:Using]-(t2:Transaction)-[:At]->(p2:Pos {{id: {}}})"
        "RETURN p1, t1, c, t2, p2".format(request_data['pos']))
    data = list(data)
    output = []
    for item in data:
        p1 = item['p1'].properties
        t1 = item['t1'].properties
        c = item['c'].properties
        t2 = item['t2'].properties
        p2 = item['p2'].properties
        output.append({'p1': p1, 't1': t1, 'c': c, 't2': t2, 'p2': p2})
    return Response(
        json.dumps(output), status=200, mimetype='application/json')
```

Rendering Views

These functions will return the requested view. More on them in the Client-Side Logic section. They are triggered by GET requests to the URLs / and /graph.
```python
@app.route('/', methods=['GET'])
def index():
    return render_template('index.html')

@app.route('/graph', methods=['GET'])
def graph():
    return render_template('graph.html',
                           pos=request.args.get('pos'),
                           frauds=request.args.get('frauds'))
```

Creating the Main Function

The function main() has three jobs:

- Clear the database if so specified in the input arguments.
- Create indexes for the nodes Card, Pos and Transaction. You can learn more about indexing here.
- Start the Flask server with the specified arguments.

```python
def main():
    if args.clean_on_start:
        clear_db()
    db.execute_query("CREATE INDEX ON :Card(id)")
    db.execute_query("CREATE INDEX ON :Pos(id)")
    db.execute_query("CREATE INDEX ON :Transaction(fraudReported)")
    app.run(host=args.app_host, port=args.app_port, debug=args.debug)

if __name__ == "__main__":
    main()
```

Adding the Client-Side Logic

Now that your server is ready, let's create the client-side logic for your web application. I'm sure that you're not here for a front-end tutorial, so I leave it up to you to experiment and get to know the individual components. Just copy this public directory with all of its contents to the root directory of your project and add the following code to the Dockerfile under the line RUN pip3 install -r requirements.txt:

```dockerfile
COPY public /app/public
```

Just to get you started, here is a basic summary of the main components in the public directory:

- img: this directory contains images and animations.
- js: this directory contains the JavaScript scripts.
- graph.js: this script handles the graph.html page. It fetches all the connected components of a POS device, renders them in the form of a graph, and can resolve a POS device and all of its connected components as not fraudulent/compromised.
- index.js: this script handles the index.html page. It initializes all of the necessary components, tells the server to generate the initial data, and fetches the fraudulent transactions.
- render.js: this script handles the graph rendering on the graph.html page using the D3.js library.
- libs: this directory contains all the locally stored libraries your application uses. For the purpose of this tutorial, we only included the memgraph-design library to style your pages.
- template: this directory contains the HTML pages.
- graph.html: this is the page that renders a graph of a compromised POS device with all of its connected components.
- index.html: this is the main page of the application. In it, you can generate new demo data and retrieve the compromised POS devices.

Starting the App

It's time to test your app. First, you need to build the Docker image by executing:

```
docker-compose build
```

Now, you can run the app with the following command:

```
docker-compose up
```

The app is available on the address 0.0.0.0:5000. Hopefully, you see a screen similar to the image below and are smiling because you just finished your graph-powered credit card fraud detection web application!

Conclusion

Relational database-management systems model data as a set of predetermined structures. Complex joins and self-joins are necessary when the dataset becomes too inter-related. Modern datasets require technically complex queries which are often very inefficient in real-time scenarios. A graph database is the perfect solution for such complex and large networks. From the underlying storage capabilities to the built-in graph algorithms, every aspect of a graph database is fine-tuned to deliver the best experience and performance when dealing with such problems. In this tutorial, you built a graph-powered credit card fraud detection application from scratch using Memgraph, Flask, and D3.js. You got a good overview of the end-to-end development process using a graph database, and hopefully some ideas for your own projects. We can't wait to see what other graph applications you come up with!
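As a recap, the two central pieces of logic in the tutorial — the fraud-report rule in pump_transactions and the "more than one connected fraud" aggregation in get_compromised_pos — can be sketched independently of Memgraph and Flask. All names and data below are illustrative, and the aggregation is simplified to count frauds directly per POS id (the real query walks through shared cards):

```python
import random
from collections import Counter

def simulate_transaction(card_compromised, pos_compromised, report_pct, rng=random):
    """Mirror the Cypher in pump_transactions: fraudReported uses the card's
    state *before* the update, then the POS flag is copied onto the card
    (SET c.compromised = p.compromised)."""
    fraud_reported = card_compromised and (rng.random() < report_pct)
    new_card_state = pos_compromised
    return new_card_state, fraud_reported

def suspicious_pos(transactions):
    """Keep POS ids with more than one fraud-reported transaction,
    ordered by the number of connected frauds (descending)."""
    counts = Counter(t["pos"] for t in transactions if t["fraudReported"])
    return sorted(((pos, n) for pos, n in counts.items() if n > 1),
                  key=lambda item: item[1], reverse=True)

# A clean card never yields a fraud report, even with report_pct = 1.0.
assert simulate_transaction(False, True, 1.0) == (True, False)
# A compromised card with report_pct = 1.0 is always reported.
assert simulate_transaction(True, True, 1.0) == (True, True)

txs = [
    {"pos": 1, "fraudReported": True},
    {"pos": 1, "fraudReported": True},
    {"pos": 2, "fraudReported": True},
    {"pos": 2, "fraudReported": False},
    {"pos": 1, "fraudReported": True},
]
print(suspicious_pos(txs))  # [(1, 3)]
```

Note that, as written in the tutorial's query, the card's flag is set equal to the POS flag rather than OR-ed with it, and the sketch reproduces that behavior faithfully.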
As mentioned at the beginning of this tutorial, feel free to ask us any questions about this tutorial or Memgraph in general on StackOverflow with the tag memgraphdb or on our official forum. Happy coding!
https://practicaldev-herokuapp-com.global.ssl.fastly.net/gdespot/how-to-develop-a-credit-card-fraud-detection-application-using-memgraph-flask-and-d3-js-4n17
Round up a variable with Shopify Liquid

I wish to assign a dummy variable to a math value so I can then take the ceiling. My current code is:

```liquid
{% if variant.compare_at_price > variant.price %}
  SAVE {{ variant.compare_at_price | minus:variant.price | times:100 | divided_by:variant.compare_at_price }}%
{% endif %}
```

The output is SAVE 20% (for example), but if the value is 19.99 it'll show 19% rather than 20%. What I want is to assign:

x = {{ variant.compare_at_price | minus:variant.price | times:100 | divided_by:variant.compare_at_price }}

and then take {{ x | ceil }}. How do I assign x?

Answer: Use the assign tag, and multiply by 100.0 instead of 100 so the division produces a float that ceil can round up (with integer operands, divided_by performs integer division):

```liquid
{% assign x = variant.compare_at_price | minus:variant.price | times:100.0 | divided_by:variant.compare_at_price %}
{% assign x = x | ceil %}
SAVE {{x}}%
```
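For illustration outside Liquid (the function name and values are made up), the same arithmetic in Python shows why the float division plus ceil turns a 19.99% saving into 20%:

```python
import math

def savings_pct(compare_at, price):
    """Percentage saved, rounded up like Liquid's `ceil` filter."""
    return math.ceil((compare_at - price) * 100.0 / compare_at)

print(savings_pct(100.0, 80.0))   # 20
print(savings_pct(100.0, 80.01))  # 20  (a 19.99% saving rounds up)
print(savings_pct(100.0, 81.0))   # 19
```

Had the intermediate value been truncated to an integer first (as Liquid's divided_by does with integer operands), ceil would have nothing left to round up, which is exactly the bug in the original snippet.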
https://e1commerce.com/items/round-up-a-variable-with-shopify-liquid
Advanced VR Mechanics With Unity and the HTC Vive Part 1

VR is more popular than ever, and making games has never been easier. But to offer a really immersive experience, your in-game mechanics and physics need to feel very, very real, especially when you're interacting with in-game objects.

In the first part of this advanced HTC Vive tutorial, you'll learn how to create an expandable interaction system and implement multiple ways to grab virtual objects inside that system and fling them around like nobody's business. By the time you're done, you'll have some flexible interaction systems that you can use right in your own VR projects!

Getting Started

You'll need the following things for this tutorial:

- A copy of Unity 5.6.0f3 (or better) installed on your machine.
- An HTC Vive with controllers that are set up, powered on, and ready to go.

If you haven't worked with the HTC Vive before, you might want to check out this previous HTC Vive tutorial to get a feel for the basics of working with the HTC Vive in Unity. The HTC Vive is one of the best head-mounted displays at the moment and offers an excellent immersive experience because of its room-scale gameplay capabilities.

Download the starter project, unzip it somewhere and open the folder inside Unity. Take a look at the folder structure in the Project window. Here's what each folder contains:

- Materials: All the materials for the scene.
- Models: All models for this tutorial.
- Prefabs: For now, this only contains the prefab for the poles that are scattered around the level. You'll place your own objects in here for later use.
- Scenes: The game scene and some lighting data.
- Scripts: A few premade scripts; you'll save your own scripts in here as well.
- Sounds: The sound for shooting an arrow from the bow.
- SteamVR: The SteamVR plugin and all related scripts, prefabs and examples.
- Textures: Contains the main texture shared by almost all models (for the sake of efficiency) as well as the texture for the book object.

Open up the Game scene inside the Scenes folder. Look at the Game view and you'll notice there's no camera present in the scene. In the next section you'll fix this by adding everything necessary for the HTC Vive to work.

Scene Setup

Select and drag the [CameraRig] and [SteamVR] prefabs from the SteamVR\Prefabs folder to the Hierarchy. The camera rig will now be on the ground, but it should be on the wooden tower. Change the position of [CameraRig] to (X:0, Y:3.35, Z:0) to correct this.

Now save the scene and press the play button to test if everything works as intended. Be sure to look around and use at least one controller to see if you can see the in-game controller moving around.

If the controllers didn't work, don't panic! At the time of writing, there's a bug in the latest SteamVR plugin (version 1.2.1) when using Unity 5.6 which causes the movement of the controllers to not register. To fix this, select Camera (eye) under [CameraRig]/Camera (head) and add the SteamVR_Update_Poses component to it. This script manually updates the position and rotation of the controllers. Try playing the scene again, and things should work much better.

Before doing any scripting, take a look at the tags in the project. These tags make it easier to detect which type of object collided or triggered with another.

Interaction System: InteractionObject

An interaction system allows for a flexible, modular approach to interactions between the player and objects in the scene. Instead of rewriting the boilerplate code for every object and the controllers, you'll be making some classes from which other scripts can be derived. The first script you'll be making is the RWVR_InteractionObject class; all objects that can be interacted with will be derived from this class.
This base class will hold some essential variables and methods. Create a new folder in the Scripts folder and name it RWVR. Create a new C# script in there and name it RWVR_InteractionObject. Open up the script in your favorite code editor and remove both the Start() and Update() methods.

Add the following variables to the top of the script, right underneath the class declaration:

```csharp
protected Transform cachedTransform; // 1

[HideInInspector] // 2
public RWVR_InteractionController currentController; // 3
```

You'll probably get an error saying RWVR_InteractionController couldn't be found. Ignore this for now, as you'll be creating that class next. Taking each commented line in turn:

- You cache the value of the transform to improve performance.
- This attribute makes the variable underneath invisible in the Inspector window, even though it's public.
- This is the controller this object is currently interacting with. You'll visit the controller in detail later on.

Save this script for now and return to the editor. Create a new C# script inside the RWVR folder named RWVR_InteractionController. Open it up, remove the Start() and Update() methods and save your work. Open the RWVR_InteractionObject script again, and the error you received before should be gone.

Now add the following three methods below the variables you just added:

```csharp
public virtual void OnTriggerWasPressed(RWVR_InteractionController controller)
{
    currentController = controller;
}

public virtual void OnTriggerIsBeingPressed(RWVR_InteractionController controller)
{
}

public virtual void OnTriggerWasReleased(RWVR_InteractionController controller)
{
    currentController = null;
}
```

These methods will be called by the controller when its trigger is either pressed, held or released. A reference to the controller is stored when it's pressed, and removed again when it's released. All of these methods are virtual and will be overridden by more sophisticated scripts later on so they can benefit from these controller callbacks.
Add the following method below OnTriggerWasReleased:

```csharp
public virtual void Awake()
{
    cachedTransform = transform; // 1

    if (!gameObject.CompareTag("InteractionObject")) // 2
    {
        Debug.LogWarning("This InteractionObject does not have the correct tag, setting it now.", gameObject); // 3
        gameObject.tag = "InteractionObject"; // 4
    }
}
```

Taking it comment-by-comment:

- Cache the transform for better performance.
- Check to see if this InteractionObject has the proper tag assigned. Execute the code below if it doesn't.
- Log a warning in the inspector to warn the developer of a forgotten tag.
- Assign the tag just in time so this object functions as expected.

The interaction system will depend heavily upon the InteractionObject and Controller tags to differentiate those special objects from the rest of the scene. It's quite easy to forget to add this tag to objects every time you add a script to it. That's why this failsafe is in place. Better to be safe than sorry! :]

Finally, add these methods below Awake():

```csharp
public bool IsFree() // 1
{
    return currentController == null;
}

public virtual void OnDestroy() // 2
{
    if (currentController)
    {
        OnTriggerWasReleased(currentController);
    }
}
```

Here's what these methods do:

- This method returns a Boolean indicating whether this object is free, i.e. not currently in use by a controller.
- When this object gets destroyed, you release it from the current controller (if there is one). This helps to avoid weird bugs later on when working with objects that can be held.

Save this script and open the RWVR_InteractionController script again. It's empty at the moment, but you'll soon fill it up with functionality!

Interaction System: Controller

The controller script might be the most important piece of all, as it's the direct link between the player and the game. It's important to make use of as much input as possible and return appropriate feedback to the player.
To start off, add the following variables below the class declaration:

```csharp
public Transform snapColliderOrigin; // 1
public GameObject ControllerModel; // 2

[HideInInspector]
public Vector3 velocity; // 3
[HideInInspector]
public Vector3 angularVelocity; // 4

private RWVR_InteractionObject objectBeingInteractedWith; // 5
private SteamVR_TrackedObject trackedObj; // 6
```

Looking at each piece in turn:

- Save a reference to the tip of the controller. You'll be adding a transparent sphere later, which will act as a guide to where and how far you can reach.
- This is the visual representation of the controller.
- This is the speed and direction of the controller. You'll use this to calculate how objects should fly when you throw them.
- This is the rotation of the controller, also used when calculating the motion of thrown objects.
- This is the InteractionObject this controller is currently interacting with. You use it to send events to the active object.
- SteamVR_TrackedObject can be used to get a reference to the actual controller.

Add this code below the variables you just added:

```csharp
private SteamVR_Controller.Device Controller // 1
{
    get { return SteamVR_Controller.Input((int)trackedObj.index); }
}

public RWVR_InteractionObject InteractionObject // 2
{
    get { return objectBeingInteractedWith; }
}

void Awake() // 3
{
    trackedObj = GetComponent<SteamVR_TrackedObject>();
}
```

Here's what's going on in the code above:

- This variable acts as a handy shortcut to the actual SteamVR controller class from the tracked object.
- This returns the InteractionObject this controller is currently interacting with. It's been encapsulated to ensure it stays read-only for other classes.
- Finally, save a reference to the TrackedObject component attached to this controller to use later.
Now add the following method:

```csharp
private void CheckForInteractionObject()
{
    Collider[] overlappedColliders = Physics.OverlapSphere(snapColliderOrigin.position, snapColliderOrigin.lossyScale.x / 2f); // 1

    foreach (Collider overlappedCollider in overlappedColliders) // 2
    {
        if (overlappedCollider.CompareTag("InteractionObject") && overlappedCollider.GetComponent<RWVR_InteractionObject>().IsFree()) // 3
        {
            objectBeingInteractedWith = overlappedCollider.GetComponent<RWVR_InteractionObject>(); // 4
            objectBeingInteractedWith.OnTriggerWasPressed(this); // 5
            return; // 6
        }
    }
}
```

This method searches for InteractionObjects within a certain range of the controller's snap collider. Once it finds one, it populates objectBeingInteractedWith with a reference to it. Here's what each line does:

- Creates a new array of colliders and fills it with all colliders found by OverlapSphere() at the position and scale of snapColliderOrigin, which is the transparent sphere you'll add shortly.
- Iterates over the array.
- If any of the found colliders has an InteractionObject tag and is free, continue.
- Saves a reference to the RWVR_InteractionObject attached to the overlapped object in objectBeingInteractedWith.
- Calls OnTriggerWasPressed on objectBeingInteractedWith and passes it the current controller as a parameter.
- Breaks out of the loop once an InteractionObject is found.

Add the following method that makes use of the code you just added:

```csharp
void Update()
{
    if (Controller.GetHairTriggerDown()) // 1
    {
        CheckForInteractionObject();
    }

    if (Controller.GetHairTrigger()) // 2
    {
        if (objectBeingInteractedWith)
        {
            objectBeingInteractedWith.OnTriggerIsBeingPressed(this);
        }
    }

    if (Controller.GetHairTriggerUp()) // 3
    {
        if (objectBeingInteractedWith)
        {
            objectBeingInteractedWith.OnTriggerWasReleased(this);
            objectBeingInteractedWith = null;
        }
    }
}
```

This is fairly straightforward:

- When the trigger is pressed, call CheckForInteractionObject() to prepare for a possible interaction.
- While the trigger is held down and there's an object being interacted with, call the object's OnTriggerIsBeingPressed().
- When the trigger is released and there's an object being interacted with, call that object's OnTriggerWasReleased() and stop interacting with it.

These checks make sure that all of the player's input is passed on to any InteractionObjects they are interacting with.

Add these two methods to keep track of the controller's velocity and angular velocity:

```csharp
private void UpdateVelocity()
{
    velocity = Controller.velocity;
    angularVelocity = Controller.angularVelocity;
}

void FixedUpdate()
{
    UpdateVelocity();
}
```

FixedUpdate() calls UpdateVelocity() at the fixed framerate, which updates the velocity and angularVelocity variables. Later, you'll pass these values to a Rigidbody to make thrown objects move more realistically.

Sometimes you'll want to hide a controller to make the experience more immersive and avoid blocking your view. Add the following two methods below the previous ones:

```csharp
public void HideControllerModel()
{
    ControllerModel.SetActive(false);
}

public void ShowControllerModel()
{
    ControllerModel.SetActive(true);
}
```

These methods simply enable or disable the GameObject representing the controller.

Finally, add the following two methods:

```csharp
public void Vibrate(ushort strength) // 1
{
    Controller.TriggerHapticPulse(strength);
}

public void SwitchInteractionObjectTo(RWVR_InteractionObject interactionObject) // 2
{
    objectBeingInteractedWith = interactionObject; // 3
    objectBeingInteractedWith.OnTriggerWasPressed(this); // 4
}
```

Here's how these methods work:

- This method makes the piezoelectric linear actuators (no, I'm not making that up) inside the controller vibrate for a certain amount of time. The longer it vibrates, the stronger the vibration feels. Its range is between 1 and 3999.
- This switches the active InteractionObject to the one specified in the parameter.
- This makes the specified InteractionObject the active one.
- Call OnTriggerWasPressed() on the newly assigned InteractionObject and pass it this controller.

Save this script and return to the editor. In order to get the controllers working as intended, you'll need to make a few adjustments.

Select both controllers in the Hierarchy. They're both children of [CameraRig]. Add a Rigidbody component to both. This will allow them to work with fixed joints and interact with other physics objects. Uncheck Use Gravity and check Is Kinematic. The controllers don't need to be affected by physics since they're strapped to your hands in real life.

Now add the RWVR_Interaction Controller component to the controllers. You'll configure those in a bit.

Unfold Controller (left) and add a Sphere to it as its child by right-clicking it and selecting 3D Object > Sphere. Select Sphere, name it SnapOrigin and press F to focus on it in the Scene view. You should see a big white hemisphere at the center of the platform floor.

Set its Position to (X:0, Y:-0.045, Z:0.001) and its Scale to (X:0.1, Y:0.1, Z:0.1). This will position the sphere right at the front of the controller. Remove the Sphere Collider component, as all physics checks are done in code. Finally, make the sphere transparent by applying the Transparent material to its Mesh Renderer.

Now duplicate SnapOrigin and drag SnapOrigin (1) to Controller (right) to make it a child of the right controller. Name it SnapOrigin.

The final step is to set up the controllers to make use of their Model and SnapOrigin. Select and unfold Controller (left), drag its child SnapOrigin to the Snap Collider Origin slot and drag Model to the Controller Model slot. Do the same for Controller (right).

Now for a bit of fun! Power on your controllers and run the scene. Move the controllers in front of the HMD to check if the spheres are clearly visible and attached to the controllers. When you're done testing, save the scene and prepare to actually use the interaction system!
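The controller's trigger handling above (down → OnTriggerWasPressed, held → OnTriggerIsBeingPressed, up → OnTriggerWasReleased) is a language-agnostic dispatch pattern. Here's a minimal Python sketch of that same flow; the class and method names are simplified stand-ins for the C# ones, not part of the tutorial's code:

```python
class InteractionObject:
    """Stand-in for RWVR_InteractionObject: records controller callbacks."""
    def __init__(self):
        self.current_controller = None
        self.events = []

    def on_trigger_was_pressed(self, controller):
        self.current_controller = controller
        self.events.append("pressed")

    def on_trigger_is_being_pressed(self, controller):
        self.events.append("held")

    def on_trigger_was_released(self, controller):
        self.current_controller = None
        self.events.append("released")


class Controller:
    """Stand-in for RWVR_InteractionController's Update() dispatch."""
    def __init__(self):
        self.active = None

    def update(self, down=False, held=False, up=False, nearby=None):
        # Trigger just pressed: grab a free object in range, if any.
        if down and nearby is not None and nearby.current_controller is None:
            self.active = nearby
            self.active.on_trigger_was_pressed(self)
        # Trigger held: keep notifying the active object.
        if held and self.active:
            self.active.on_trigger_is_being_pressed(self)
        # Trigger released: notify the object, then let go of it.
        if up and self.active:
            self.active.on_trigger_was_released(self)
            self.active = None


controller = Controller()
ball = InteractionObject()
controller.update(down=True, held=True, nearby=ball)  # grab frame
controller.update(held=True)                          # holding
controller.update(up=True)                            # release frame
print(ball.events)  # ['pressed', 'held', 'held', 'released']
```

Note how the release frame also clears the active object, just as Update() sets objectBeingInteractedWith to null — this is what makes the object free for the next grab.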
Grabbing Objects Using The Interaction System

You may have noticed the objects laying around the scene. You can take a good look at them, but you can't pick them up yet. You'd better fix that soon, or how will you ever learn how awesome our Unity book is?! :]

In order to interact with rigidbodies like these, you'll need to create a new derivative class of RWVR_InteractionObject that will let you grab and throw objects.

Create a new C# script in the Scripts/RWVR folder and name it RWVR_SimpleGrab. Open it up in your code editor and remove the Start() and Update() methods. Replace the following:

```csharp
public class RWVR_SimpleGrab : MonoBehaviour
```

…with:

```csharp
public class RWVR_SimpleGrab : RWVR_InteractionObject
```

This makes the script derive from RWVR_InteractionObject, which provides all the hooks onto the controller's input so it can appropriately handle the input.

Add these variables below the class declaration:

```csharp
public bool hideControllerModelOnGrab; // 1

private Rigidbody rb; // 2
```

Quite simply:

- A flag indicating whether or not the controller model should be hidden when this object is picked up.
- Cache the Rigidbody component for performance and ease of use.

Add the following method below those variables:

```csharp
public override void Awake()
{
    base.Awake(); // 1
    rb = GetComponent<Rigidbody>(); // 2
}
```

Short and sweet:

- Call Awake() on the base class RWVR_InteractionObject. This caches the object's Transform component and checks if the InteractionObject tag is assigned.
- Store the attached Rigidbody component for later use.

Now you need some helper methods that will attach and release the object to and from the controller by using a FixedJoint.
Add the following methods below Awake():

```csharp
private void AddFixedJointToController(RWVR_InteractionController controller) // 1
{
    FixedJoint fx = controller.gameObject.AddComponent<FixedJoint>();
    fx.breakForce = 20000;
    fx.breakTorque = 20000;
    fx.connectedBody = rb;
}

private void RemoveFixedJointFromController(RWVR_InteractionController controller) // 2
{
    if (controller.gameObject.GetComponent<FixedJoint>())
    {
        FixedJoint fx = controller.gameObject.GetComponent<FixedJoint>();
        fx.connectedBody = null;
        Destroy(fx);
    }
}
```

Here's what these methods do:

- This method accepts the controller to "stick" to as a parameter, creates a FixedJoint component, attaches it to the controller, configures it so it won't break easily and finally connects it to the current InteractionObject. The reason you set a finite break force is to prevent users from moving objects through other solid objects, which might result in weird physics glitches.
- The controller passed as a parameter is relieved of its FixedJoint component (if there is one). The connection to this object is removed and the FixedJoint is destroyed.

With those methods in place, you can take care of the actual player input by implementing some OnTrigger methods from the base class. To start off, add OnTriggerWasPressed():

```csharp
public override void OnTriggerWasPressed(RWVR_InteractionController controller) // 1
{
    base.OnTriggerWasPressed(controller); // 2

    if (hideControllerModelOnGrab) // 3
    {
        controller.HideControllerModel();
    }

    AddFixedJointToController(controller); // 4
}
```

This method adds the FixedJoint when the player presses the trigger button to interact with the object. Here's what you do in each part:

- Override the base OnTriggerWasPressed() method.
- Call the base method to initialize the controller.
- If the hideControllerModelOnGrab flag was set, hide the controller model.
- Add a FixedJoint to the controller.
The final step for this simple grab class is to add OnTriggerWasReleased():

```csharp
public override void OnTriggerWasReleased(RWVR_InteractionController controller) // 1
{
    base.OnTriggerWasReleased(controller); // 2

    if (hideControllerModelOnGrab) // 3
    {
        controller.ShowControllerModel();
    }

    rb.velocity = controller.velocity; // 4
    rb.angularVelocity = controller.angularVelocity;

    RemoveFixedJointFromController(controller); // 5
}
```

This method removes the FixedJoint and passes on the controller's velocities to create a realistic throwing effect. Comment-by-comment:

- Override the base OnTriggerWasReleased() method.
- Call the base method to unassign the controller.
- If the hideControllerModelOnGrab flag was set, show the controller model again.
- Pass the controller's velocity and angular velocity to this object's rigidbody. This means the object will react in a realistic manner when you release it. For example, if you're throwing a ball, you move the controller from back to front in an arc. The ball should gain rotation and a forward-acting force, as if you had passed on your actual kinetic energy in real life.
- Remove the FixedJoint.

Save this script and return to the editor. The dice and books are linked to their respective prefabs in the Prefabs folder. Open this folder in the Project view, select the Book and Die prefabs and add the RWVR_Simple Grab component to both. Also enable Hide Controller Model.

Save and run the scene. Try grabbing some of the books and dice and throwing them around.

In the next section I'll explain another way of grabbing objects: via snapping.

Grabbing and Snapping Objects

Grabbing objects at the position and rotation of your controller usually works, but in some cases snapping the object to a certain position might be desirable. For example, when the player sees a gun, they would expect the gun to be pointing in the right direction once they've picked it up. This is where snapping comes into play.
In order for snapping to work, you'll need to create another script. Create a new C# script inside the Scripts/RWVR folder and name it RWVR_SnapToController. Open it in your favorite code editor and remove the Start() and Update() methods. Replace the following:

```csharp
public class RWVR_SnapToController : MonoBehaviour
```

…with:

```csharp
public class RWVR_SnapToController : RWVR_InteractionObject
```

This lets the script use all of the InteractionObject capabilities.

Add the following variable declarations:

```csharp
public bool hideControllerModel; // 1
public Vector3 snapPositionOffset; // 2
public Vector3 snapRotationOffset; // 3

private Rigidbody rb; // 4
```

Here's what these variables are for:

- A flag to tell whether the controller's model should be hidden once the player grabs this object.
- The position offset added after snapping. The object snaps to the controller's position by default.
- Same as above, except this handles the rotation.
- A cached reference to this object's Rigidbody component.

Add the following method below the variables:

```csharp
public override void Awake()
{
    base.Awake();
    rb = GetComponent<Rigidbody>();
}
```

Just as in the SimpleGrab script, this overrides the base Awake() method, calls the base and caches the Rigidbody component.

Next up are the helper methods, which form the real meat of this script. Add the following method below Awake():

```csharp
private void ConnectToController(RWVR_InteractionController controller) // 1
{
    cachedTransform.SetParent(controller.transform); // 2

    cachedTransform.rotation = controller.transform.rotation; // 3
    cachedTransform.Rotate(snapRotationOffset);

    cachedTransform.position = controller.snapColliderOrigin.position; // 4
    cachedTransform.Translate(snapPositionOffset, Space.Self);

    rb.useGravity = false; // 5
    rb.isKinematic = true; // 6
}
```

The way this script attaches the object differs from the SimpleGrab script: it doesn't use a FixedJoint, but instead makes itself a child of the controller.
This means the connection between the controller and snapped objects can't be broken by force. This will keep everything stable for this tutorial, but you might prefer to use a FixedJoint in your own projects.

Taking it play-by-play:

- Accept a controller as a parameter to connect to.
- Set this object's parent to be the controller.
- Make this object's rotation the same as the controller's and add the offset.
- Make this object's position the same as the controller's and add the offset.
- Disable the gravity on this object; otherwise, it would fall out of your hand.
- Make this object kinematic. While attached to the controller, this object won't be under the influence of the physics engine.

Now add the matching method to release the object:

```csharp
private void ReleaseFromController(RWVR_InteractionController controller) // 1
{
    cachedTransform.SetParent(null); // 2

    rb.useGravity = true; // 3
    rb.isKinematic = false;

    rb.velocity = controller.velocity; // 4
    rb.angularVelocity = controller.angularVelocity;
}
```

This simply unparents the object, resets the rigidbody and applies the controller velocities. In more detail:

- Accept the controller to release as a parameter.
- Unparent the object.
- Re-enable gravity and make the object non-kinematic again.
- Apply the controller's velocities to this object.

Add the following override method to perform the snapping:

```csharp
public override void OnTriggerWasPressed(RWVR_InteractionController controller) // 1
{
    base.OnTriggerWasPressed(controller); // 2

    if (hideControllerModel) // 3
    {
        controller.HideControllerModel();
    }

    ConnectToController(controller); // 4
}
```

This one is fairly straightforward:

- Override OnTriggerWasPressed() to add the snap code.
- Call the base method.
- If the hideControllerModel flag was set, hide the controller model.
- Connect this object to the controller.
Now add the release method below:

```csharp
public override void OnTriggerWasReleased(RWVR_InteractionController controller) // 1
{
    base.OnTriggerWasReleased(controller); // 2

    if (hideControllerModel) // 3
    {
        controller.ShowControllerModel();
    }

    ReleaseFromController(controller); // 4
}
```

Again, fairly simple:

- Override OnTriggerWasReleased() to add the release code.
- Call the base method.
- If the hideControllerModel flag was set, show the controller model again.
- Release this object from the controller.

Save this script and return to the editor. Drag the RealArrow prefab out of the Prefabs folder into the Hierarchy window. Select the arrow and set its position to (X:0.5, Y:4.5, Z:-0.8). It should be floating above the stone slab now.

Attach the RWVR_Snap To Controller component to the new arrow in the Hierarchy so you can interact with it, and set its Hide Controller Model bool to true. Finally, press the Apply button at the top of the Inspector window to apply the changes to this prefab. For this object, there's no need to change the offsets; it should snap to an acceptable position by default.

Save the scene and run it. Grab the arrow and throw it away. Let your inner beast out! Notice that the arrow will always be positioned properly in your hand, no matter how you pick it up.

You're all done with this tutorial; play around with the game a bit to get a feel for the dynamics of the interactions.

Where to Go From Here?

You can download the finished project here. In this tutorial you've learned how to create an expandable interaction system, and you've discovered several ways to grab objects using that system. In the next part of this tutorial, you'll learn how to expand the system further by making a functional bow and arrow, and more. Thanks for reading!
https://www.raywenderlich.com/159552/advanced-vr-mechanics-unity-htc-vive-part-1
I have some Tkinter code with a Label object. The user needs to be able to select text from the Label in order to copy the text (and paste it into something else). However, it seems Labels aren't selectable? I can't figure out how to enable this... any suggestions?

```python
from Tkinter import *

master = Tk()
label = Label(master, text="Example: foo_bar(1,2,3)")
label.pack()

mainloop()
```
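A common workaround (not from the original thread, just a well-known Tkinter idiom) is to use a read-only Entry instead of a Label: an Entry's text can be selected and copied, and with a flat relief it looks much like a Label. A sketch, matching the Python 2 import style of the question (use `tkinter` on Python 3):

```python
from Tkinter import *  # on Python 3: from tkinter import *

master = Tk()

text = "Example: foo_bar(1,2,3)"
entry = Entry(master, relief=FLAT, width=len(text))  # flat border resembles a Label
entry.insert(0, text)              # put the text in first...
entry.config(state="readonly")     # ...then lock it against edits
entry.pack()

mainloop()
```

The order matters: inserting text after setting `state="readonly"` would silently do nothing, which is why the widget is configured read-only last.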
https://www.daniweb.com/programming/software-development/threads/316754/selectable-label-with-tkinter
In this tutorial I am going to cover search suggestions. It's the second and last part of a tutorial on Android search. In the first part I covered the basics of how to use the search framework of Android within your app. If you haven't read it yet, you might want to do so now.

What are search suggestions

When a user enters a search query, Android starts to provide suggestions for possible search queries, based on what the user has entered so far. Given the shortcomings of onscreen keyboards this eases searching a lot – which is why you should think about adding search suggestions to your app as well.

Types of suggestions

For search suggestions to work you need to add a content provider to your app and also to configure your app's search metadata. Android supports two suggestion models:

- Your suggestions are based on queries the user made in the past
- Your suggestions are based on data of your app (e.g. a database)

Recent Query Suggestions

First I am going to show how to display recent queries as suggestions. This is really easy to do. Android provides the class SearchRecentSuggestionsProvider which offers a complete solution for recent suggestions. All you have to do is to inherit from this class and to configure it within your constructor:

```java
import android.content.SearchRecentSuggestionsProvider;

public class SampleRecentSuggestionsProvider extends SearchRecentSuggestionsProvider {

    public static final String AUTHORITY = SampleRecentSuggestionsProvider.class.getName();
    public static final int MODE = DATABASE_MODE_QUERIES;

    public SampleRecentSuggestionsProvider() {
        setupSuggestions(AUTHORITY, MODE);
    }
}
```

Now your content provider is able to return all recent queries – or would be, if it knew about those queries in the first place. What you have done so far is to configure your content provider to provide suggestions. But you also have to save the queries so that Android can display them when the user starts another search.
This is done from within your search activity when it reacts to the query:

```java
SearchRecentSuggestions suggestions = new SearchRecentSuggestions(this,
        SampleRecentSuggestionsProvider.AUTHORITY,
        SampleRecentSuggestionsProvider.MODE);
suggestions.saveRecentQuery(query, null);
```

As you can see, SearchRecentSuggestions' saveRecentQuery() method does that for you.

Next you have to add your SearchRecentSuggestionsProvider subclass as a content provider within the AndroidManifest.xml file. This is not different from the configuration of any other content provider:

```xml
<provider
    android:name=".SampleRecentSuggestionsProvider"
    android:authorities="de.openminds.SampleRecentSuggestionsProvider" />
```

Finally you need to add two lines to your search configuration file to enable your recent suggestions provider:

```xml
android:searchSuggestAuthority="com.grokkingandroid.SampleRecentSuggestionsProvider"
android:searchSuggestSelection=" ?"
```

The database containing your app's recent suggestions

The SearchRecentSuggestionsProvider stores the search history in a database called suggestions.db within your app's databases directory. This db stores an id, the query, a display text and the timestamp of the time the query was made. The timestamp is needed since suggestions are sorted by time so that the most recent queries show up first.

You should also consider clearing the search history from within your app – maybe even offer the user the possibility to do so. Clearing the history is easily done: you simply have to call the method clearHistory() of the SearchRecentSuggestions object that you used above to save the search terms. If you do not clear the history, Android will eventually do so itself. At most 250 search terms are kept in the database.

App-specific suggestions

For many apps you want to customize what suggestions to display. Most often it's not interesting what the user has already searched for, but what you can actually provide.
For this you need to implement a content provider accessing an app-specific data source. But your provider can have rudimentary implementations for many methods a normal content provider would have to implement in detail. For search to work, only the query() and getType() methods are of interest.

Android's search framework calls your query() method to find out about the suggestions of your app. But to supply suggestions, you first have to know what the user has typed into the search box so far. You get this information in one of two ways:

- The URI contains the query as its last segment. This is the default.
- The selection parameter contains the query.

The query as part of the URI

The URI Android uses for search is a bit weird at first glance. As usual, it consists of the authority of your content provider plus an optional path element you can configure in the search configuration. But in addition to these two elements, Android always adds another constant to the path to make it unique for search. And as the last element, Android adds the query text – the text the user has entered so far.

The constant added is search_suggest_query, defined as the final field SUGGEST_URI_PATH_QUERY within the class SearchManager. So the URI looks like this:

content://authority/optionalPath/search_suggest_query/queryText

The next code snippet shows what your code would look like if your content provider is not used for anything other than search:

```java
public Cursor query(Uri uri, String[] projection, String selection,
        String[] selectionArgs, String sortOrder) {
    String query = uri.getLastPathSegment();
    if (SearchManager.SUGGEST_URI_PATH_QUERY.equals(query)) {
        // user hasn't entered anything
        // thus return a default cursor
    } else {
        // query contains the user's search
        // return a cursor with appropriate data
    }
}
```

The query as part of a configured where clause

The other possibility to get the query is to define a where clause in your search configuration.
You have to add the attribute android:searchSuggestSelection. For the sample app I use the following configuration:

android:searchSuggestSelection="name like ?"

The resulting code for a search-only content provider looks pretty simple:

public Cursor query(Uri uri, String[] projection, String selection,
        String[] selectionArgs, String sortOrder) {
    if (selectionArgs != null && selectionArgs.length > 0
            && selectionArgs[0].length() > 0) {
        // the entered text can be found in selectionArgs[0]
        // return a cursor with appropriate data
    } else {
        // user hasn't entered anything
        // thus return a default cursor
    }
}

In the end it is up to your preferences which of the two query styles you choose. I prefer configuring the where clause since it’s a bit easier to handle and still very flexible.

Suggestions threshold

By default Android queries your search suggestions provider after any change to the text of the search box. If your data source contains a lot of data and you are unlikely to provide meaningful suggestions for just a few characters, it might be best to show suggestions only when more characters have been typed. You can change the threshold value to achieve this. A value of "3" means that Android presents suggestions only when the user has entered at least three characters:

android:searchSuggestThreshold="3"

Preparing the suggestions list for later evaluation

The cursor you return at the end of the query() method must follow strict conventions. The framework expects columns with specific names – some of which you can also configure alternatively in your configuration file. Android uses the returned values to display the suggestions and also to call your search activity with an intent object that contains all relevant information for you to react to a selected suggestion.

Configuration options for suggestion clicks

To create the intent for your search activity, Android needs to know which action to use and which URI to use as value for the intent-data.
You can configure both in the search configuration file using the attributes android:searchSuggestIntentAction and android:searchSuggestIntentData respectively.

android:searchSuggestIntentAction = "android.intent.action.VIEW"
android:searchSuggestIntentData = "content://someAuthority/somePath"

Returning a well-formed cursor for search suggestions

All other values must be part of your cursor. As mentioned above, the columns of your cursor must adhere to a strict naming convention. All possible columns are defined within the class SearchManager, but the following three are those that you most likely need. Of these, only the column SearchManager.SUGGEST_COLUMN_TEXT_1 is mandatory. But normally you would need at least the data id as well – after all you need to know how to react to a selected suggestion. And for this you might want to have an id ready, so that you can query your data source. The problem with the strict naming conventions, of course, is that your data source is unlikely to follow these conventions. Normally you would use names that represent some real-life attributes, like first and last names of customers, locations for events and so on. Thus you need a mapping. The method setProjectionMap() of the class SQLiteQueryBuilder helps with that. Alas, the usage of the projection map is not without some downsides. I will cover these and how to deal with them in a follow-up post. For now I ignore these problems and simply show you how to use the projection map.
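Pulling together the attributes mentioned so far, a search configuration file (searchable.xml) for an app-specific suggestions provider might look like the following sketch. The attribute names come from the text above; the authority, intent-data URI and resource references are illustrative assumptions, not the author's actual file.

```xml
<!-- Illustrative sketch: authority, data URI and string resources are assumptions. -->
<searchable xmlns:android="http://schemas.android.com/apk/res/android"
    android:label="@string/app_name"
    android:hint="@string/search_hint"
    android:searchSuggestAuthority="com.example.SuggestionsProvider"
    android:searchSuggestSelection="name like ?"
    android:searchSuggestThreshold="3"
    android:searchSuggestIntentAction="android.intent.action.VIEW"
    android:searchSuggestIntentData="content://com.example.SuggestionsProvider/items" />
```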
Map<String, String> projectionMap = new HashMap<String, String>();
projectionMap.put(COL_BAND, COL_BAND + " AS " + SearchManager.SUGGEST_COLUMN_TEXT_1);
projectionMap.put(COL_ID, COL_ID);
projectionMap.put(COL_ROW_ID, COL_ROW_ID + " AS " + SearchManager.SUGGEST_COLUMN_INTENT_DATA_ID);
projectionMap.put(COL_LOCATION, COL_LOCATION + " AS " + SearchManager.SUGGEST_COLUMN_TEXT_2);
projectionMap.put(COL_DATE, COL_DATE);
builder.setProjectionMap(projectionMap);

Reacting to a suggestion click

When a user selects a search suggestion, Android’s search framework calls the search activity that you configured in your manifest file. It uses an explicit intent to do so. In case of an app-specific suggestions provider this intent contains data from your configuration file and your returned cursor. How to proceed depends on whether you use a recent suggestions or an app-specific suggestions provider. If you used a recent suggestions provider, you have to start the search for this term again to present a list of results. The action for the intent is Intent.ACTION_SEARCH. In this case the search activity would look exactly like the one you have seen in the first part of this tutorial. With an app-specific suggestions provider the suggestions list most often shows concrete data. If a user clicks on one of these suggestions, he does not want to see a list of results again – instead he wants to see a detail view for the item that he selected. You can configure which action to use, but most often it will be Intent.ACTION_VIEW. Since your search activity probably inherits from ListActivity, it is not the appropriate activity for a detail view. In this case your search activity must read the id and other information contained within the intent and start a new activity for the detail view. That’s the reason your cursor needs the data id, as described above. The next snippet shows how to start the details activity.
private void handleIntent(Intent intent) {
    if (Intent.ACTION_SEARCH.equals(intent.getAction())) {
        String query = intent.getStringExtra(SearchManager.QUERY);
        doSearch(query);
    } else if (Intent.ACTION_VIEW.equals(intent.getAction())) {
        Uri detailUri = intent.getData();
        String id = detailUri.getLastPathSegment();
        Intent detailsIntent = new Intent(getApplicationContext(), DetailsActivity.class);
        detailsIntent.putExtra("ID", id);
        startActivity(detailsIntent);
        finish();
    }
}

Global search

Basically all you have to do is add one line to your search configuration file:

android:includeInGlobalSearch="true"

Of course there is a catch. Your search provider only gets used when the user adds it to the list of searchable items for the Quick Search Box. Without this one-time user interaction your provider won’t be asked for suggestions. The problem is not so much with the code but with how you get your user to add your app to this list: your app itself is not allowed to change this value – only the user can. But luckily Android provides you with an intent you can use to jump directly to the configuration page for globally searchable items:

android.app.SearchManager.INTENT_ACTION_GLOBAL_SEARCH

You should consider offering your user a way to trigger this intent from within your app. When the user has added your app to the list of searchable items, your provider’s results are shown within the search results. As you can see on the screenshot, Android displays concerts of the sample app within the global suggestions list.

The biggest problem with search suggestions

Search suggestions have one big problem: you have no control over how they are going to be displayed. This can be a big problem if you have customized the appearance of your app a lot and applied a custom style. In this case the suggestions displayed by Android would most likely ruin the consistency within your app.
I think there is a difference between switching to other apps for necessary tasks (like picking a contact or adding an event) and searching for content of your app. The latter should always fit your style. For a Holo-only app, the UI of the standard search suggestions poses no problem. For customized apps it often does. It gets worse if you want to present distinct data. Search in a concert app might yield results for locations as well as for bands. You might want to separate both – maybe even add section headers. You are out of luck with the standard suggestions. You need to roll your own suggestions solution in this case. There is no reason for despair though. What you have learned in this tutorial is needed for global search as well. And for many apps, global search makes sense. Also, most apps do not need section headers and might even stick more or less to the standard Android design. In this case you do not need to worry.

Wrapping up

As you can see, Android offers you a powerful search framework that you can tweak to your app’s needs without too much hassle. You control whether your app’s content is searchable only locally or also globally. You can offer suggestions based on recent search queries or based on your app’s data, and tweak what to do once the user has selected a search suggestion. Of course I couldn’t cover everything in this tutorial. The biggest topic missing here is how to use search shortcuts. I will cover them in a later post. Stay tuned!

20 thoughts on “Android Tutorial: Adding Search Suggestions”

Very good article, I wish I had this back in my early Android development days. 🙂

Nice article, having a working sample would be great!

Thanks, Emanuele! You are right about a working example. I am working on a sample app that includes all tutorials of my blog and gets enhanced as my blog progresses. But it’s not ready yet. I will let you know here in the comment section and also via an additional blog post when it’s ready.
Thank you, these tutorials are really helping a lot. I’m an iOS developer moving now to Android; currently I’m looking for a working sample of a search suggestion using my APIs. I still haven’t found any so far.

Very good article(s)! Android SDK’s sample “SearchableDictionary” implements much of the functionality discussed – but not for global search and not using recent search queries. The question I’m trying to answer is whether each suggestion can contain more than 2 fields. It seems SearchManager.SUGGEST_COLUMN_TEXT_1 and .SUGGEST_COLUMN_TEXT_2 are the only things that can be displayed in the suggestion, but my understanding is only half-baked.

Scott, you don’t have much choice here. That’s okay, because your result ends up in the global search result list. Some limitations have to apply here to make this usable. You can use two lines of text, or one line of text and a second line as a link (from API level 8 onwards). And you can use two icons. That’s all there is. But I think that should suffice. Just stick to the most important part of the search result and present this to the user.

Thanks for the confirmation! I’ll figure out a way to make it work. 🙂

I would like suggestions on the search view, but from a remote REST service, just like Google Play. Every other example I have seen on the web is using Cursor, ContentProvider, and SQLite DB for the suggestions. I can’t find an example of suggestions from a remote service. Maybe I need to cache the results from the service in the DB and then use the standard approach for suggestions. I don’t know which approach to take, and I’m all confused. What are your insights on this one?

Interesting idea for a blog post. But nothing to answer in detail within a comment. As a short reply: what Google probably does is contact its service asynchronously as soon as you start typing.
It stores the results when they come back into the content provider – which automatically triggers a change in the dropdown box (I’ve written about that in the Loaders, ContentObserver and ContentProvider posts). I think I will do a post about it, but don’t expect it too soon. I’ve just started another series about a completely different topic today. Here is a Gist I’ve found that you can use as a starting point. But I would like to see your approach and an implementation using Loaders.

“At most 250 search terms are kept in the database.” Can’t I extend the search terms? Or do I have to link to my app’s web-service database for more terms?

Thanks for this nice, very well explained tutorial about search suggestions. I have achieved showing custom search suggestions following your tutorial and the official Android tutorial. —-BUT—- I have multiple activities which have android.widget.SearchView in their action bar. Now I want each SearchView to display different suggestions based on different database tables. I have also posted this question on StackOverflow if you want further detail on my problem.

I’ve just added an answer to your SO question. Basically you have to use a second searchable xml file and – of course – have to do a bit more in your content provider to support multiple tables.

I got an unknown URI error in recent query suggestions, please help me. I wrote the code as you show it here. I also checked the Android developer website for this piece of code. I tried everything but can’t get rid of it.

That’s a bit vague. Please post your question on SO and add all the relevant information to it. In your case these are at least: your search xml file, a logcat containing the error and any stacktrace related to it, your query() method, and the manifest file’s entries for search and for the content provider.

Hi, thank you for your post. I have spent a lot of time trying to understand the action bar search widget functionality. You have simplified it.
thanks Tapas

Now what is COL_BAND etc.? This useless overabstraction will be the death of Java.

Hey. What if I want to make a UI like Zomato’s location search activity – i.e., the suggestions come in a well-defined listview beneath, which can have custom views for different positions. HELP ANYONE!

Hi Wolfram, are you planning to put up a working sample? Perhaps the sample app which you’ve talked about earlier?

Definitely not any time soon. Sorry!
http://www.grokkingandroid.com/android-tutorial-adding-suggestions-to-search/
I'm writing a program that will take a file as input, go through the list of numbers in the file and return the max number using recursion. I'm not very good with recursion; here is what I have so far. Also, I'm not allowed to use the max command.

def Mymax(lists):
    if (len(lists[1:]) > 0):
        return

def main():
    fname = input("Enter filename: ")
    infile = open(fname, "r")
    data = infile.read()
    for line in data.splitlines():
        print(eval(line))

main()
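For reference, one way to finish the recursive idea the poster started: the base case is a one-element list, and the recursive case compares the head of the list against the maximum of the tail. This is a sketch, not the only possible answer; the function and variable names are made up for illustration.

```python
def my_max(nums):
    """Return the largest number in a non-empty list, recursively."""
    # Base case: a one-element list's maximum is that element.
    if len(nums) == 1:
        return nums[0]
    # Recursive case: the max is either the head or the max of the tail.
    rest_max = my_max(nums[1:])
    return nums[0] if nums[0] > rest_max else rest_max


def read_numbers(fname):
    # One number per line, as in the original program; float() is safer than eval().
    with open(fname) as infile:
        return [float(line) for line in infile.read().splitlines() if line.strip()]
```

Calling `my_max(read_numbers(fname))` then prints the largest value without using the built-in max.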
https://www.daniweb.com/programming/software-development/threads/442154/max-num-in-list-file
Back in 2012 I wrote a blog post on using Tor on Android which has proved quite popular over the years. These days, there is the OrFox browser, which is from The Tor Project and is likely the current best way to browse the web through Tor on your Android device. If you’re still using the custom-setup Firefox, I’d recommend giving OrFox a try – it’s been working quite well for me.

(This was published in an internal technical journal last week, and is now being published here. If you already know what Docker is, feel free to skim the first half.)

Docker seems to be the flavour of the month in IT. Most attention is focussed on using Docker for the deployment of production services. But that’s not all Docker is good for. Let’s explore Docker, and two ways I use it as a software developer.

Docker: what is it?

Docker is essentially a set of tools to deal with containers and images. To make up an artificial example, say you are developing a web app. You first build an image: a file system which contains the app, and some associated metadata. The app has to run on something, so you also install things like Python or Ruby and all the necessary libraries, usually by installing a minimal Ubuntu and any necessary packages.1 You then run the image inside an isolated environment called a container. You can have multiple containers running the same image (for example, your web app running across a fleet of servers) and the containers don’t affect each other. Why? Because Docker is designed around the concept of immutability. Containers can write to the image they are running, but the changes are specific to that container, and aren’t preserved beyond the life of the container.2 Indeed, once built, images can’t be changed at all, only rebuilt from scratch.
However, as well as enabling you to easily run multiple copies, another upshot of immutability is that if your web app allows you to upload photos, and you restart the container, your photos will be gone. Your web app needs to be designed to store all of the data outside of the container, sending it to a dedicated database or object store of some sort. Making your application Docker friendly is significantly more work than just spinning up a virtual machine and installing stuff. So what does all this extra work get you? Three main things: isolation, control and, as mentioned, immutability. Isolation makes containers easy to migrate and deploy, and easy to update. Once an image is built, it can be copied to another system and launched. Isolation also makes it easy to update software your app depends on: you rebuild the image with software updates, and then just deploy it. You don’t have to worry about service A relying on version X of a library while service B depends on version Y; it’s all self contained. Immutability also helps with upgrades, especially when deploying them across multiple servers. Normally, you would upgrade your app on each server, and have to make sure that every server gets all the same sets of updates. With Docker, you don’t upgrade a running container. Instead, you rebuild your Docker image and re-deploy it, and you then know that the same version of everything is running everywhere. This immutability also guards against the situation where you have a number of different servers that are all special snowflakes with their own little tweaks, and you end up with a fractal of complexity. Finally, Docker offers a lot of control over containers, and for a low performance penalty. Docker containers can have their CPU, memory and network controlled easily, without the overhead of a full virtual machine. 
This makes it an attractive solution for running untrusted executables.3 As an aside: despite the hype, very little of this is actually particularly new. Isolation and control are not new problems. All Unixes, including Linux, support ‘chroots’. The name comes from “change root”: the system call changes the process’s idea of what the file system root is, making it impossible for it to access things outside of the new designated root directory. FreeBSD has jails, which are more powerful, Solaris has Zones, and AIX has WPARs. Chroots are fast and low overhead. However, they offer much lower ability to control the use of system resources. At the other end of the scale, virtual machines (which have been around since ancient IBM mainframes) offer isolation much better than Docker, but with a greater performance hit. Similarly, immutability isn’t really new: Heroku and AWS Spot Instances are both built around the model that you get resources in a known, consistent state when you start, but in both cases your changes won’t persist. In the development world, modern CI systems like Travis CI also have this immutable or disposable model – and this was originally built on VMs. Indeed, with a little bit of extra work, both chroots and VMs can give the same immutability properties that Docker gives. The control properties that Docker provides are largely a result of leveraging some Linux kernel concepts, most notably something called namespaces. What Docker does well is not something novel, but the engineering feat of bringing together fine-grained control, isolation and immutability, and – importantly – a tool-chain that is easier to use than any of the alternatives. Docker’s tool-chain eases a lot of pain points with regards to building containers: it’s vastly simpler than chroots, and easier to customise than most VM setups. Docker also has a number of engineering tricks to reduce the disk space overhead of isolation.
So, to summarise: Docker provides a toolkit for isolated, immutable, finely controlled containers to run executables and services.

Docker in development: why?

I don’t run network services at work; I do performance work. So how do I use Docker? There are two things I do with Docker: I build PHP 5, and do performance regression testing on PHP 7. They’re good case studies of how isolation and immutability provide real benefits in development and testing, and how the Docker tool chain makes life a lot nicer than previous solutions.

PHP 5 builds

I use the isolation that Docker provides to make building PHP 5 easier. PHP 5 depends on an old version of Bison, version 2. Ubuntu and Debian long since moved to version 3. There are a few ways I could have solved this; Docker makes it easy to have a self-contained environment that has Bison 2 built from source, and to build my latest source tree in that environment. Why is Docker so much easier? Firstly, Docker allows me to base my container on an existing container, and there’s an online library of containers to build from.4 This means I don’t have to roll a base image with debootstrap or the RHEL/CentOS/Fedora equivalent. Secondly, unlike a chroot build process, which ultimately is just copying files around, a docker build process includes the ability to both copy files from the host and run commands in the context of the image. This is defined in a file called a Dockerfile, and is kicked off by a single command: docker build. So, my PHP 5 build container loads an Ubuntu Vivid base container, uses apt-get to install the compiler, tool-chain and headers required to build PHP 5, then installs old bison from source, copies in the PHP source tree, and builds it. The vast majority of this process – the installation of the compiler, headers and bison – can be cached, so they don’t have to be downloaded each time. And once the container finishes building, I have a fully built PHP interpreter ready for me to interact with.
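The build process described above can be sketched as a Dockerfile. This is a hypothetical reconstruction, not the author's actual file: the base image tag, package list, Bison version, download URL and paths are all assumptions.

```dockerfile
# Hypothetical sketch of the PHP 5 build container described in the text.
# Base image, packages, Bison version and paths are assumptions.
FROM ubuntu:15.04

# Compiler, tool-chain and headers needed to build PHP 5.
RUN apt-get update && apt-get install -y \
    build-essential autoconf libxml2-dev wget

# PHP 5 needs Bison 2, which modern Ubuntu/Debian no longer ship,
# so build it from source. This layer is cached between builds.
RUN wget http://ftp.gnu.org/gnu/bison/bison-2.7.tar.gz \
    && tar xf bison-2.7.tar.gz \
    && cd bison-2.7 \
    && ./configure && make && make install

# Copy in the PHP source tree and build it.
COPY . /php-src
WORKDIR /php-src
RUN ./buildconf --force && ./configure && make
```

Everything above the `COPY` line is cached by Docker, so only the final build step re-runs when the source tree changes.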
I do, at the moment, rebuild PHP 5 from scratch each time. This is a bit sub-optimal from a performance point of view. I could alleviate this with a Docker volume, which is a way of sharing data persistently between a host and a guest, but I haven’t been sufficiently bothered by the speed yet. However, Docker volumes are also quite fiddly, leading to the development of tools like docker compose to deal with them. They are also prone to subtle and difficult-to-debug permission issues.

PHP 7 performance regression testing

The second thing I use docker for takes advantage of the throwaway nature of Docker environments to prevent cross-contamination. PHP 7 is the next big version of PHP, slated to be released quite soon. I care about how that runs on POWER, and I preferably want to know if it suddenly deteriorates (or improves!). I use Docker to build a container with a daily build of PHP 7, and then I run a benchmark in it. This doesn’t give me a particularly meaningful absolute number, but it allows me to track progress over time. Building it inside of Docker means that I can be sure that nothing from old runs persists into new runs, thus giving me more reliable data. However, because I do want the timing data I collect to persist, I send it out of the container over the network. I’ve now been collecting this data for almost 4 months, and it’s plotted below, along with a 5-point moving average. The most notable feature of the graph is the drop in benchmark time at about the middle. Sure enough, if you look at the PHP repository, you will see that a set of changes to improve PHP performance was merged on July 29: changes submitted by our very own Anton Blanchard.5

Docker pain points

Docker provides a vastly improved experience over previous solutions, but there are still a few pain points. For example: Docker was apparently written by people who had no concept that platforms other than x86 exist. This leads to major issues for cross-architectural setups.
For instance, Docker identifies images by a name and a revision. For example, ubuntu is the name of an image, and 15.04 is a revision. There’s no ability to specify an architecture. So, how do you specify that you want, say, a 64-bit, little-endian PowerPC build of an image versus an x86 build? There have been a couple of approaches, both of which are pretty bad. You could name the image differently: say ubuntu_ppc64le. You can also just cheat and override the ubuntu name with an architecture-specific version. Both of these break some assumptions in the Docker ecosystem and are a pain to work with. Image building is incredibly inflexible. If you have one system that requires a proxy, and one that does not, you need different Dockerfiles. As far as I can tell, there are no simple ways to hook in any changes between systems into a generic Dockerfile. This is largely by design, but it’s still really annoying when you have one system behind a firewall and one system out on the public cloud (as I do in the PHP 7 setup). Visibility into a Docker server is poor. You end up with lots of different, anonymous images and dead containers, and you end up needing scripts to clean them up. It’s not clear what Docker puts on your file system, or where, or how to interact with it. Docker is still using reasonably new technologies. This leads to occasional weird, obscure and difficult-to-debug issues.6 Docker provides me with a lot of useful tools in software development: both in terms of building and testing. Making use of it requires a certain amount of careful design thought, but when applied thoughtfully it can make life significantly easier.

There’s some debate about how much stuff from the OS installation you should be using. You need to have key dynamic libraries available, but I would argue that you shouldn’t be running long-running processes other than your application. You shouldn’t, for example, be running an SSH daemon in your container.
(The one exception is that you must handle orphaned child processes appropriately: see) Considerations like debugging and monitoring the health of docker containers mean that this point of view is not universally shared.

Why not simply make them read only? You may be surprised at how many things break when running on a read-only file system. Things like logs and temporary files are common issues.

It is, however, easier to escape a Docker container than a VM. In Docker, an untrusted executable only needs a kernel exploit to get to root on the host, whereas in a VM you need a guest-to-host vulnerability, which are much rarer.

Anyone can upload an image, so this does require running untrusted code from the Internet. Sadly, this is a distinctly retrograde step when compared to the process of installing binary packages in distros, which are all signed by a distro’s private key.

See

I hit this last week:, although maybe that’s my fault for running systemd on my laptop.

So today I saw Freestanding “Hello World” for OpenPower on Hacker News. Sadly Andrei hadn’t been able to test it on real hardware, so I set out to get it running on a real OpenPOWER box. Here’s what I did. Firstly, clone the repo, and, as mentioned in the README, comment out mambo_write. Build it. Grab op-build, and build a Habanero defconfig. To save yourself a fair bit of time, first edit openpower/configs/habanero_defconfig to answer n about a custom kernel source. That’ll save you hours of waiting for git. This will build you a PNOR that will boot a Linux kernel with Petitboot. This is almost what you want: you need Skiboot, Hostboot and a bunch of the POWER-specific bits and bobs, but you don’t actually want the Linux boot kernel.
Then, based on op-build/openpower/package/openpower-pnor/openpower-pnor.mk, we look through the output of op-build for a create_pnor_image.pl command, something like this monstrosity: PATH="/scratch/dja/public/op-build/output/host/bin:/scratch/dja/public/op-build/output/host/sbin:/scratch/dja/public/op-build/output/host/usr/bin:/scratch/dja/public/op-build/output/host/usr/sbin:/home/dja/bin:/home/dja/bin:/home/dja/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/opt/openpower/common/x86_64/bin" /scratch/dja/public/op-build/output/build/openpower-pnor-ed1682e10526ebd85825427fbf397361bb0e34aa/create_pnor_image.pl -xml_layout_file /scratch/dja/public/op-build/output/build/openpower-pnor-ed1682e10526ebd85825427fbf397361bb0e34aa/"defaultPnorLayoutWithGoldenSide.xml" -pnor_filename /scratch/dja/public/op-build/output/host/usr/powerpc64-buildroot-linux-gnu/sysroot/pnor/"habanero.pnor" -hb_image_dir /scratch/dja/public/op-build/output/host/usr/powerpc64-buildroot-linux-gnu/sysroot/hostboot_build_images/ -scratch_dir /scratch/dja/public/op-build/output/host/usr/powerpc64-buildroot-linux-gnu/sysroot/openpower_pnor_scratch/ -outdir /scratch/dja/public/op-build/output/host/usr/powerpc64-buildroot-linux-gnu/sysroot/pnor/ -payload /scratch/dja/public/op-build/output/images/"skiboot.lid" -bootkernel /scratch/dja/public/op-build/output/images/zImage.epapr -sbe_binary_filename "venice_sbe.img.ecc" -sbec_binary_filename "centaur_sbec_pad.img.ecc" -wink_binary_filename "p8.ref_image.hdr.bin.ecc" -occ_binary_filename /scratch/dja/public/op-build/output/host/usr/powerpc64-buildroot-linux-gnu/sysroot/occ/"occ.bin" -targeting_binary_filename "HABANERO_HB.targeting.bin.ecc" -openpower_version_filename /scratch/dja/public/op-build/output/host/usr/powerpc64-buildroot-linux-gnu/sysroot/openpower_version/openpower-pnor.version.txt Replace the -bootkernel arguement with the path to ppc64le_hello, e.g.: -bootkernel /scratch/dja/public/ppc64le_hello/ppc64le_hello Don’t forget to 
move it into place!

mv output/host/usr/powerpc64-buildroot-linux-gnu/sysroot/pnor/habanero.pnor output/images/habanero.pnor

Then we can use skiboot’s boot test script (written by Cyril and me, coincidentally!) to flash it:

ppc64le_hello/skiboot/external/boot-tests/boot_test.sh -vp -t hab2-bmc -P <path to>/habanero.pnor

It’s not going to get into Petitboot, so just interrupt it after it powers up the box and connect with IPMI. It boots, kinda:

[11012941323,5] INIT: Starting kernel at 0x20010000, fdt at 0x3044db68 (size 0x11cc3)
CPU0 not found? Pick your 1.42486|ERRL|Dumping errors reported prior to registration

Yes, it does wrap horribly. However, the big issue here (which you’ll have to scroll to see!) is the “CPU0 not found?”. Fortunately, we can fix this with a little patch to cpu_init in main.c to test for a PowerPC POWER8:

cpu0_node = fdt_path_offset(fdt, "/cpus/cpu@0");
if (cpu0_node < 0) {
    cpu0_node = fdt_path_offset(fdt, "/cpus/PowerPC,POWER8@20");
}
if (cpu0_node < 0) {
    printk("CPU0 not found?\n");
    return;
}

This is definitely the wrong way to do this, but it works for now.
Now, correcting for weird wrapping, we get:

Assuming default SLB size
SLB size = 0x20
TB freq = 512000000
[13205442015,3] OPAL: Trying a CPU re-init with flags: 0x2
Unrecoverable exception stack top @ 0x20019EC8
HTAB (2048 ptegs, mask 0x7FF, size 0x40000) @ 0x20040000
SLB entries:
1: E 0x8000000 V 0x4000000000000400
EA 0x20040000 -> hash 0x20040 -> pteg 0x200 = RA 0x20040000
EA 0x20041000 -> hash 0x20041 -> pteg 0x208 = RA 0x20041000
EA 0x20042000 -> hash 0x20042 -> pteg 0x210 = RA 0x20042000
EA 0x20043000 -> hash 0x20043 -> pteg 0x218 = RA 0x20043000
EA 0x20044000 -> hash 0x20044 -> pteg 0x220 = RA 0x20044000
EA 0x20045000 -> hash 0x20045 -> pteg 0x228 = RA 0x20045000
EA 0x20046000 -> hash 0x20046 -> pteg 0x230 = RA 0x20046000
EA 0x20047000 -> hash 0x20047 -> pteg 0x238 = RA 0x20047000
EA 0x20048000 -> hash 0x20048 -> pteg 0x240 = RA 0x20048000
...

The weird wrapping seems to be caused by NULLs getting printed to OPAL, but I haven’t traced what causes that. Anyway, now it largely works! Here’s a transcript of some things it can do on real hardware:

e> Testing exception handling...
sc(feed) => 0xFEEDFACE
t> EA 0xFFFFFFF000 -> hash 0xFFFFFFF -> pteg 0x3FF8 = RA 0x20010000
mapped 0xFFFFFFF000 to 0x20010000 correctly
EA 0xFFFFFFF000 -> hash 0xFFFFFFF -> pteg 0x3FF8 = unmap
EA 0xFFFFFFF000 -> hash 0xFFFFFFF -> pteg 0x3FF8 = RA 0x20011000
mapped 0xFFFFFFF000 to 0x20011000 incorrectly
EA 0xFFFFFFF000 -> hash 0xFFFFFFF -> pteg 0x3FF8 = un
u> EA 0xFFFFFFF000 -> hash 0xFFFFFFF -> pteg 0x3FF8 = RA 0x20080000
returning to user code
returning to kernel code
EA 0xFFFFFFF000 -> hash 0xFFFFFFF -> pteg 0x3FF8 = unmap

I also tested the other functions and they all seem to work. Running non-privileged code with the MMU on works. Dumping the FDT and the 5s delay both worked, although they tend to stress IPMI a lot. The delay seems to correspond well with real time as well.
It does tend to error out and reboot quite often, usually on the menu screen, for reasons that are not clear to me. It usually starts with something entirely uninformative from Hostboot, like this:

    1.41801|ERRL|Dumping errors reported prior to registration
    2.89873|Ignoring boot flags, incorrect version 0x0

That may be easy to fix, but again I haven’t had time to trace it. All in all, it’s very exciting to see something come out of the simulator and into real hardware. Hopefully, with the proliferation of OpenPOWER hardware, prices will fall and these sorts of systems will become increasingly accessible to people with cool low-level projects like this!

The way autoboot behaves in Petitboot has undergone some significant changes recently, so in order to ward off any angry emails let’s take a quick tour of how the new system works.

Old & Busted

For some context, here is the old (or current, depending on what you’re running) section of the configuration screen. This gives you three main options: don’t autoboot, autoboot from anything, or autoboot only from a specific device. For the majority of installations this is fine, such as when you have only one default option, or know exactly which device you’ll be booting from. A side note about default options: it is important to note that not all boot options are valid autoboot options. A boot option is only considered for auto-booting if it is marked default, e.g. ‘set default’ in GRUB and ‘default’ in PXE options.

New Hotness

Below is the new autoboot configuration. The new design allows you to specify an ordered list of autoboot options. The last two of the three buttons are self-explanatory: clear the list and autoboot from any device, or clear the list completely (no autoboot). Selecting the first button, ‘Add Device’, brings up the following screen: From here you can select any device or class of device to add to the boot order.
Once added to the boot order, the order of boot options can be changed with the left and right arrow keys, and options can be removed from the list with the minus key (‘-’). This allows you to create additional autoboot configurations such as “Try to boot from sda2, otherwise boot from the network”, or “Give priority to PXE options from eth0, otherwise try any other netboot option”. You can retain the original behaviour by only putting one option into the list (either ‘Any Device’ or a specific device). Presently you can add any option into the list and order them however you like, which means you can do silly things like this:

IPMI

Slightly prior to the boot order changes, Petitboot also received an update to its IPMI handling. IPMI ‘bootdev’ commands allow you to override the current autoboot configuration remotely, either by specifying a device type to boot (e.g. PXE), or by forcing Petitboot to boot into the ‘setup’ or ‘safe’ modes. IPMI overrides are either persistent or non-persistent. A non-persistent override will disappear after a successful boot (that is, a successful boot of a boot option, not booting to Petitboot itself), whereas a persistent override will, well, persist! If there is an IPMI override currently active, it will appear in the configuration screen with an option to manually clear it:

That sums up the recent changes to autoboot: a bit more flexibility in assigning priority, and options for a more detailed autoboot order if you need it. New versions of Petitboot are backwards compatible and will recognise older saved settings, so updating your firmware won’t cause your machines to start booting things at random.

(I wrote this blog post a couple of months ago, but it’s still quite relevant.) Hi, I’m Daniel! I work in OzLabs, part of IBM’s Australian Development Labs. Recently, I’ve been assigned to the CAPI project, and I’ve been given the opportunity to give you an idea of what this is, and what I’ll be up to in the future!

What even is CAPI?
To help you understand CAPI, think back to the time before computers. We had a variety of machines: machines to build things, to check things, to count things, but they were all specialised — good at one and only one thing. Specialised machines, while great at their intended task, are really expensive to develop. Not only that, it’s often impossible to change how they operate, even in very small ways. Computer processors, on the other hand, are generalists. They are cheap. They can do a lot of things. If you can break a task down into simple steps, it’s easy to get them to do it. The trade-off is that computer processors are incredibly inefficient at everything. Now imagine, if you will, that a specialised machine is a highly trained and experienced professional, and a computer processor is a hungover university student. Over the years, we’ve tried lots of things to make the student faster. Firstly, we gave the student lots of caffeine to make them go as fast as they can. That worked for a while, but you can only give someone so much caffeine before they become unreliable. Then we tried teaming the student up with another student, so they can do two things at once. That worked, so we added more and more students. Unfortunately, lots of tasks can only be done by one person at a time, and team-work is complicated to co-ordinate. We’ve also recently noticed that some tasks come up often, so we’ve given them some tools for those specific tasks. Sadly, the tools are only useful for those specific situations. Sometimes, what you really need is a professional. However, there are a few difficulties in getting a professional to work with uni students. They don’t speak the same way; they don’t think the same way, and they don’t work the same way. You need to teach the uni students how to work with the professional, and vice versa. Previously, developing this interface – this connection between a generalist processor and a specialist machine – has been particularly difficult.
The interface between processors and these specialised machines – known as accelerators – has also tended to suffer from bottlenecks and inefficiencies. This is the problem CAPI solves. CAPI provides a simpler and more optimised way to interface specialised hardware accelerators with IBM’s most recent line of processors, POWER8. It’s a common ‘language’ that the processor and the accelerator speak, which makes it much easier to build the hardware side and easier to program the software side. In our Canberra lab, we’re working primarily on the operating system side of this. We are working with some external companies who are building CAPI devices and the optimised software products which use them. From a technical point of view, CAPI provides coherent access to system memory and processor caches, eliminating a major bottleneck in using external devices as accelerators. This is illustrated really well by the following graphic from an IBM promotional video. In the non-CAPI case, you can see there’s a lot of data (the little boxes) stalled in the PCIe subsystem, whereas with CAPI, the accelerator has direct access to the memory subsystem, which makes everything go faster.

Uses of CAPI

CAPI technology is already powering a few really cool products. Firstly, we have an implementation of Redis that sits on top of flash storage connected over CAPI. Or, to take out the buzzwords, CAPI lets us do really, really fast NoSQL databases. There’s a video online giving more details. Secondly, our partner Mellanox is using CAPI to make network cards that run at speeds of up to 100Gb/s. CAPI is also part of IBM’s OpenPOWER initiative, where we’re trying to grow a community of companies around our POWER system designs. So in many ways, CAPI is both a really cool technology and a brand new ecosystem that we’re growing here in the Canberra labs. It’s very cool to be a part of!

I wrote this blog post late last year; it is very relevant for this blog, though, so I’ll repost it here.
With the launch of TYAN’s OpenPOWER reference system, now is a good time to reflect on the team responsible for so much of the research, design and development behind this ground-breaking first step for OpenPOWER, involved from start to finish with this new Power platform. ADL Canberra have been integral to the success of this launch, providing the Open Power Abstraction Layer (OPAL) firmware. OPAL breathes new life into Linux on Power, finally allowing Linux to run directly on the hardware. While OPAL harnesses the hardware, ADL Canberra significantly improved Linux to sit on top and take direct control of IBM’s new POWER8 processor without needing to negotiate with a hypervisor. With all the Linux expertise present at ADL Canberra, it’s no wonder that a Linux-based bootloader was developed to make this system work. Petitboot leverages all the resources of the Linux kernel to create a light, fast and yet extremely versatile bootloader. Petitboot provides a massive number of tools for debugging and system configuration without the need to load an operating system. TYAN have developed great and highly customisable hardware. ADL Canberra have been there since day 1, performing vital platform enablement (bringup) of this new hardware. ADL Canberra have put work into the entire software stack: low-level work to get OPAL and Linux to talk to the new BMC chip, as well as higher-level work enabling Linux to run in either endianness; Linux is even now capable of virtualising KVM guests in either endianness, irrespective of host endianness. Furthermore, a subset of ADL Canberra have been key to getting the Coherent Accelerator Processor Interface (CAPI) off the ground, enabling almost endless customisation and greater diversity within the OpenPOWER ecosystem. ADL Canberra is the home of Linux on Power, and the beginning of OpenPOWER hardware sees much of the hard work by ADL Canberra come to fruition.
Let’s say your child is currently a classic consumer – they love watching TV and reading books, but they don’t really enjoy making things themselves. Or maybe they are making some things, but it’s not really technological. We think any kind of making is awesome, but one of our favourite kinds is the kind where kids realize that they can build and influence the world around them. There’s an awesome Steve Jobs quote that I love about exactly this. Imagine if you can figure this out as a child.

Recently, there have been (actually two) ports of TianoCore (the reference implementation of UEFI firmware) to run on POWER on top of OPAL (provided by skiboot), and it can be run in the Qemu PowerNV model. More details:

… the 1s and 0s from the input audio. The lower section deals with framing and plucking out valid telemetry packets. A couple of interesting features:
New Linux Kernel Crash-Exploit discovered

Ant writes "According to a linuxreviews article on 6/11/2004, there is a nasty bug that lets a simple C program crash the kernel (2.4.18-2.6.x reported so far), effectively locking the whole system. It affects both 2.4.2x and 2.6.x kernels on the x86 architecture. The exploit can be compiled and run without root access; only shell access is needed. Detailed information and source code are in the article." You need to have shell access to run this program; it's also worth noting that not *all* flavors are vulnerable. Please read the article for the full details.

There's a big difference... (Score:5, Insightful)

There are goods and bads; however, the information is readily available. There are patches that "work", even before a full explanation is available. Now, thousands of people are actively working on a solution, if they so choose. If they don't choose, they can use the proprietary-code method: wait for the official vendors to release a patch. In proprietary land, a vendor would first sue the person who released the information. Then they'd reiterate that you won't be vulnerable if you use a "properly configured firewall," and then they'd start working on a fix.

Re:There's a big difference... (Score:4, Interesting)

Forget the firewall, get a properly implemented system.

Re:There's a big difference... (Score:5, Funny)

I've come up with the final word in firewall technology. What I do is connect my PC to the DSL router with a 10' ethernet cable. Then, using an approved tool, I carefully cut the cable, making sure to sever it completely. Haven't had a problem since. What we really need is an article suggesting how I can speed up my downloads...

Re:There's a big difference... (Score:5, Funny)

Re:There's a big difference... (Score:5, Funny)

This is a common mistake that many first-time security administrators make.
You're supposed to cut the cable before making the PC/router connection -- always implement your security protocol before connecting equipment to the outside world. Your downloads are probably slow because your machine was compromised during the time when your security was down - i.e., the interval between connecting the unsecured cable and the time you properly locked the connection down. Slow downloads are a key sign of a compromised system. Once you suspect your machine's been compromised, there's really no safe solution other than reinstalling everything from scratch. I'd also suggest discarding the cable and getting a new one - since you didn't secure the cable first, there may be an RF resonance bug lurking on the PC half of the cable, waiting to reinfect your machine when you hook it back up. You're obviously new to this, so just in case you haven't heard about them - RF resonance bugs use the reflection characteristics of an Ethernet cable to create a self-reinforcing standing-wave signal containing a copy of the virus. Older versions of these bugs could be dealt with simply by putting the cable in a Faraday cage and grounding the cable. But several of the more current RF resonator bugs contain quantum-mechanical sideband waveforms - put one of those in a Faraday cage and the q-m sidebands can refractively propagate into the cage itself, and you'll spend the rest of the day chasing down heisenbugs. Anyways, don't feel bad about this - it's a common enough mistake when you're just getting started with security. And by posting on /. you may have helped several other novices avoid making the same mistake.

Re:There's a big difference... (Score:3, Interesting)

Open a command window (start->run->"cmd"). Ping any host (for example a host on your lan). Now press F7 and enter a couple of times. The machine reboots. This works on almost every W2K machine I've tried it on, regardless of SP level.
In general, local exploits like these aren't taken seriously at all on Windows. Basically, if you've got full access to the machine all bets are off; there's just so many ways to bluescreen the machine intentionally, many including…

Re:There's a big difference... (Score:5, Informative)

I'm far from a Windows fanboy. I use Linux almost all the time... I just happened to have a Windows box on my network atm.

Re:There's a big difference... (Score:3, Informative)

Oh, and saying that local exploits aren't taken seriously is both a major understatement, and a not-so-major problem. After all, you can fix all the Denial-of-Service exploits you want, but if someone has local access to the machine, they can always pull out the power cord. That is not easily fixed with an OS patch. Never underestimate the use of a heavy door and good locks.

Re:There's a big difference... (Score:5, Insightful)

Or "Windows users tend not to care?" Incidentally, I'm currently a (primarily) Windows user and I do patch (actually it's "install updates") when Windows tells me they're ready (if I estimate I need the particular update). Claiming that Windows users "don't care" just because they're Windows users is incorrect, to say the least. How can people mod that as insightful? Generalization like that should be discouraged as it is not constructive, but some actually reward it... Quite puzzling to me..

Re:There's a big difference... (Score:5, Insightful)

Re:There's a big difference... (Score:4, Insightful)

The vast majority of Windows users behave exactly as the grandparent post states. I know this because I deal with the results every day in my shop. I'd guess that 80% of the machines I see are in due to spyware and virus problems that could have been fixed with a patch available weeks earlier. More often than not, when I get these systems up and running, the first thing that happens is "*pop* Windows has downloaded updates and is now ready to install them."
So the updates were already downloaded, waiting for the user to click "Install"... but the user never did, for reasons already mentioned. Automatic patching on XP Home would be doing end-users (and the internet!) a huge favor.

Re:There's a big difference... (Score:3, Funny)

Damned if you do...

Re:There's a big difference... (Score:3, Insightful)

Yup, and you know why? Because Microsoft tends to introduce arbitrary EULA or functionality changes in their patches. So with an autopatching system, you'd be agreeing to these changes implicitly. Whoops.

Re:There's a big difference... (Score:3, Interesting)

The reduction in spam and viruses alone would be worth the effort.

Re:There's a big difference... (Score:5, Interesting)

My Dad is a perfect case in point. He's an upper-level manager of a company. He was telling me about a piece of software he was planning on purchasing. I asked him about security. His answer was, simply, that the salesperson said it was secure. There are two things wrong with this: 1) He took the salesperson's word. In previous generations, people's words meant something. Trying to train them to think skeptically is difficult. In addition, by what yardstick would he, a non-technical manager, measure security? What's worse is that I've met his IT staff, and I wouldn't trust them to measure security, either. 2) He thinks that security is a yes/no option. Security is nothing like that. If someone were to be honest with him, and tell him that nothing is truly secure and it's all trade-offs, and then explain the trade-offs of their particular product, I'm sure he would have thought they were weaseling, when in fact they were telling the truth.

Re:There's a big difference... (Score:5, Insightful)

AMEN! It's a problem that I run into quite often, and not just with security. When you come to understand a topic intimately enough, you learn that there is very little in the world that's a yes/no option.
Everything requires a level of expertise and must be tailored to the specific task at hand. The issue is that the people requesting the services don't know, don't have time to learn, and don't want to learn. They want the yes/no answer to keep their life easy. If you're the person attempting to sell your services in order to keep food on the plate, however, you're faced with a dilemma: say "yes" and possibly get mired in a situation which is impossible (secure a network full of users who are actively trying to break the network), or say "no" and don't get the job.

Re:There's a big difference... (Score:3, Insightful)

Let's be real. He has good reason to trust the company about security information, and they have good reason to present accurate information. If the software fails and he gets hacked, the company loses business at best, gets bad publicity and a nasty lawsuit at worst. You act like people wanting easy solutions is a negative thing. Not everyone is a security expert. That's why w

Re:There's a big difference... (Score:3, Insightful)

It's not negative. It's the hubris that assumes that there _must_ be an easy solution, and that whoever presents a solution and calls it "easy" must have found the right answer. "Not everyone is a security expert." I'm not saying they are. The point is that they assume that people who tell them what they want to hear _are_ security experts. "The less time we spend worrying about things we don't care about, the more time we can spend on things we

Re:There's a big difference... (Score:3, Insightful)

Re:There's a big difference... (Score:3, Insightful)

What makes you think that the majority of Windows users take their computers to shops for software problems? In my experience, the only people who do that are the ones too technically incompetent to solve the problem and too socially incompetent to find a techie friend to help them.

Re:There's a big difference... (Score:3, Insightful)

This is puzzling to you?
Hmm, I am more puzzled by the fact that entire COMPANIES went down when some of the worms started spreading, because of unpatched systems that should have been patched MONTHS (almost a year IIRC) before. Now, i

Re:There's a big difference... (Score:5, Insightful)

Don't play that game... 3 months before the big nasty worm hit, I was threatened with being fired because I patched all my systems with the RPC hole patch... not by my supervisor, but by a bunch of jerks in corporate IT... After it hit and we were immune to the problems, did I hear an "I'm sorry?" or anything else? Nope. My boss bought me lunch that entire week and wrote a shining/gleaming letter to be put in my employment file... but corporate asshats refused to acknowledge that a nobody from the midwest division knew more than them. Most of the problems in companies that got nailed with the RPC hole worms were ignorance and apathy... they do things "their way" and ignore anyone below them on the totem pole... until the fire starts raging... My boss and many of us are starting to change corporate IT by throwing them under the bus at every chance... It's the only solution we can see to fix the problem.

I know plenty who do... (Score:5, Insightful)

In the real world, where I work, I run a hybrid network where I'm still waiting for Windows XP Service Pack 2 to come out in a finalized form, because I don't have the option to pull just the parts that I need, and SP2 RC2 is not quite ready to unleash on my network (although I have actively TESTED it). Of course, this just fixes some vulnerabilities that have existed for over a year. Don't tell me that I, as a Windows user and administrator, don't care. While I've ignored this kernel issue over the weekend, I get to actively compile some kernel patches and test those. I'll bet, even before my testing, that I'll be able to have a production solution by tomorrow.
Even if SP2 releases this afternoon, I'll still have to test it before deployment, so the Linux solution will be in production first.

Re:There's a big difference... (Score:5, Interesting)

Sudden thought - is there much of a Windows 'community', or has it all fragmented into myriad different areas? That's possibly one aspect of security that's often overlooked; for instance, when the recent Mac OS X vulnerabilities became known, word went around the Mac community very quickly, and people discovered new aspects of the problems, created workarounds like Paranoid Android... There's something very similar with Linux as well - but is there a Windows equivalent of, say, Slashdot? Do Microsoft-oriented community discussion sites exist, complete with flamewars over widget styles in Microsoft Word, etc etc etc? Or do you have to be an underdog for such a thing to exist?

Windows Community (Score:5, Interesting)

It's good reading for anybody interested; however, unlike Slashdot, registration is required.

Re:There's a big difference... (Score:4, Interesting)

Re:There's a big difference... (Score:4, Insightful)

Windows users don't tend to care. They don't read Windows news sites daily, they don't subscribe to mailing lists that send out warnings as soon as a vulnerability is found. They don't patch when Windows tells them to. You know why? They don't care, they don't want to "break" anything, or they don't even know that the little icon in their taskbar is any different from the 1000 other ones in the tray. The observation you make is correct. The group you apply it to is incorrectly targeted. Do you suppose that if all of a sudden the vast majority of these Windows users migrated to a more favored OS, they would magically read relevant OS news sites daily, subscribe to kernel mailing lists, and patch when their OS told them to? Of course not. Users are users.
They're not interested in OS news or maintenance any more than they absolutely have to be (which, given the nature of modern technology, is practically nil). The fact that most computer users run Windows is largely an artifact of business dealings, not some conscious decision on the part of the users. No, the way to solve such problems for the computer users of the world is by providing better defaults, i.e., automatic patching turned on out of the box. If you're part of the tinfoil-hat crowd, go ahead and turn off automatic patching. If you like to patch manually and can be trusted to do it, go ahead and turn it off. But if you're part of the unwashed masses, your computer just takes care of itself.

Re:There's a big difference... (Score:3)

The only difference is that the newer Linux installs ask you how you want the firewall configured (with a pretty secure setting as the "click next" default), while XP users are waiting for Service Pack 2.

They DO care. But are afraid... (Score:3, Interesting)

Lessons learned: (1) use Linux and keep it up-to-date with apt-get; (2) in the games partition which runs win

Re:There's a big difference... (Score:3, Funny)

Re:There's a big difference... (Score:4, Funny)

ME, of course, doesn't have to be secure; it will crash itself. XP with SP2 will start shipping within 6 weeks of final release. It's currently under Release Candidate status, meaning it should be no more than 10 years away. (That was very sarcastic; really, it should be within the next 60 days unless something really bad happens with the test code.)

Re:There's a big difference... (Score:5, Interesting)

Re:There's a big difference... (Score:5, Insightful)

That said, it does seem to be true that a Linux patch will appear a lot more quickly than an MS patch, and that seems to be a result of the fact that it's open source.

Re:There's a big difference... (Score:5, Informative)

Yea, the only difference is that in OSS the steps are usually covered in about a third the time.
This hit the kernel-list dated 2004-06-09 21:02:57. It is now 2004-06-14 09:41:12 in my neck of the woods, and it is patched. The last update mentioned on the article's page is yesterday. It would appear the patch was available in no more than 4 days. It takes more than four days for a lot of vendors just to look at the goddamn report. Then they spend the next week hoping it goes away on its own. Then they ignore the follow-ups. Two months later, when the submitter has had enough, they go to FULL DISCLOSURE and the vendor gets pissed off and starts attacking the person who reported it for not giving them enough time to write a patch they haven't even started on. Then they spend another month making lousy excuses for why it's not a serious issue and half-assed suggestions of what you can turn off to avoid the problem. Finally, after about four months of hand-wringing, press releases, and general bullshit, you might get a patch. If you're lucky, it won't require you to start the process over again by introducing a brand new vulnerability. If you're lucky. There's a huge difference here. The Linux folks jumped up and solved the problem. They didn't sit around pissing on their hands for months and making excuses like a lot of vendors do.

Re:There's a big difference... (Score:5, Informative)

"WHAT YOU SAY!?" I run a corporate network without a firewall. Every time a major issue comes around and destroys every freaking company around me, I get by with maybe two systems affected. Why? I stay up-to-date on all patches, and I keep relatively SANE security policies in place. A firewall is a lot less necessary than firewall vendors would have you believe. My experience is that firewalls breed a false sense of security. Someone goes home over the weekend with a laptop - and comes back with a zombie virus/worm/etc. that goes and infects everything while the IT department is "taking their time" evaluating a security update for a month (I do 24-hour tests).
Why not firewall, is the other thing I hear. Mostly, it's so that every one of my systems can be an internet service provider. That's what the internet is about [fourmilab.ch]. Enabling users to say, hey - I've got that file right here on my local FTP, come get it. Here, log onto my VNC desktop, and I'll show you. Firewalls create industries like WebEx. Because technology has gone from 'wow, I didn't know you could do that,' to 'I didn't know you could do that because I'm firewalled.' Finally, "It doesn't happen very often," quite clearly means that it has happened. Call it pre-teen style bitching if you will, but a lawsuit should never have been threatened (AFAIK, a lawsuit never actually went to court). If someone finds a vulnerability, full disclosure should not be the only method to have Microsoft take you seriously. My teen years are LONG behind me; maybe I'm just sick of having to deal with Microsoft's crap since Windows for Workgroups 3.11 (when the problems started for me).

Re:There's a big difference... (Score:3, Funny)

Now that you really plug for it, though, wasn't there a guy in France who was on the run for publishing exploits in common anti-virus software? Slashdot even had a story about him. I tried googling, but "France antivirus vulnerability author" doesn't quite match the pages that I wanted. Googling for "framed because proprietary software companies are opportunistic pigs" doesn't quite get it either.
Windows is obviously superior (Score:4, Funny)

The best way to avoid this bug (Score:5, Funny)

Re:The best way to avoid this bug (Score:3, Interesting)

Re:The best way to avoid this bug (Score:5, Insightful)

Wait, (Score:5, Funny)

In case of slashdotting (Score:5, Funny)

    #include <stdio.h>

    int main(void)
    {
        printf("I love Windows\n");
        return 0;
    }

This is another reason why C should be deprecated (Score:5, Funny)

Re:This is another reason why C should be deprecat (Score:4, Insightful)

Re:This is another reason why C should be deprecat (Score:3, Insightful)

Also, in light of recent events concerning the ADTI 'Samizdat' book and the author getting Tanenbaum's nationality wrong, describing Linus Torvalds as a Swede is a masterstroke.

Re:This is another reason why C should be deprecat (Score:4, Informative)

Some people are pedantic about these sorts of things. Personally, my only spelling pet peeve is seeing people use 'alot'.

if you're running 2.4.25 or 2.4.26 (Score:4, Informative)

not whoring [slashdot.org].

Re:if you're running 2.4.25 or 2.4.26 (Score:3, Informative)

I suppose it is not a problem since I don't allow shell access to my machines, but I guess it wouldn't hurt to patch anyway.

The problem appears to be... (Score:5, Informative)

Some questions I have for those who may have been following this: Does the crash occur without the syscalls in the signal handler/main process? Does the crash occur on SMP machines? Does the crash occur with other signals (PIPE, USR1, etc.)? Does the crash occur on ppc, sparc, etc.?

I read the article too, I'm an idiot. (Score:5, Informative)

So itanium, ppc, etc. are safe. But my other questions still remain. Note that the person who reported the bug thought they were triggering a gcc bug. As it turns out, he munged his FPU assembly instructions. The GCC people rightly told him to contact the lkml... it's definitely an exception handling issue.

IGNORE above ... new info. (Score:5, Informative)

The issue isn't that the context is gone...
the issue is that the kernel is executing a non-waiting FPU instruction, i.e. "fwait", on returning from a context that flushes a user thread (i.e. return from signal handler, or syscall after execve). This triggers the FPE, except the kernel isn't set up to handle FPEs properly from kernel space in this case. The problem is that the TS flag is set because it's switching tasks, so it receives a different exception, trap 7 (device_not_available). The purpose of that exception is to signal the kernel that a newly created process wants the FPU. So it attempts to set up the FPU... which ends up calling __clear_fpu again... heh... and the original exception isn't cleared yet... whoops. What's really weird is I found this document [polimi.it], which details the potential problems of trying to use the FPU in an interrupt handler in the Linux kernel. They brought up the potential of triggering this EXACT PROBLEM... quote "endless trap 7 activation"... only in this case they're talking about writing an interrupt routine, not returning from a signal handler. Still, they had already discovered this misbehavior... Well, you can't really call it that, though. It was sort of by design (to make task switching faster). But the thing is you have to be ABSOLUTELY SURE that you never raise an FPE when TS is set and you're NOT a user thread. That's what gets you burned here.

Re:Not all... (read for more info) (Score:3, Funny)

No. The proof is left as an exercise to the reader.

Who has shell access? (Score:5, Funny)

Re:Who has shell access? (Score:5, Insightful)

Re:Who has shell access? (Score:3, Insightful)

It already has a program running on it that I had to develop to detect processes using too much processor time and kill them (with warnings, messages printed out when students log in, and so on). I'll probably have to upgrade it to do the same with memory, now that we have one genius who seems to be finding a way to consume 1.8Gb.

Re:Who has shell access?
(Score:3, Insightful) Don't kill it, renice it. It'll still run, but it'll cede the processor to other apps when they need it. Also, ulimit can handle limiting memory. Re:Who has shell access? (Score:3, Informative) Re:Who has shell access? (Score:3, Informative) Re:Who has shell access? (Score:3, Interesting) Re:Who has shell access? (Score:4, Funny) I used the search term "shell accounts", in case you couldn't think of something more relevant than "cheese" or "striped cow" to search for.... SCO (Score:3, Funny) Okay, I'm confused... (Score:5, Funny) Re:Okay, I'm confused... (Score:4, Funny) You know you have problems if... (Score:5, Funny) If your system is a production server with 1000 online users then do not test this code on that box. You do NOT need shell access (Score:3, Informative) Vulnerability in Linux, NetBSD Unaffected!!! (Score:5, Funny) ``This doesn't affect NetBSD Stable.'' The exploit code also doesn't work on Windows 95, nor on Menuet. I haven't tested SkyOS, because I don't have a license. Re:Vulnerability in Linux, NetBSD Unaffected!!! (Score:5, Funny) 2.6.5 not really affected but acting odd (Score:3, Interesting) UML? (Score:5, Interesting) If this were to be run on a UML session, what would happen? Would the damage be limited to that UML session, or would the host machine go down? Re:UML? (Score:4, Informative) Says session just dies. Host is OK. Re:UML? (Score:3, Interesting) Rus Older gcc-versions also vulnerable (Score:3, Informative) I think we're forgetting one important thing.... (Score:5, Funny) Know what else (Score:4, Insightful) You know what else makes the kernel crash? At least if you are using 2.6.5 or higher, if you enable APIC/APIC-IO and you have an nForce chipset, the system will lock up as soon as you do too much I/O. Patch doesn't work for me, 2.4.26 (Score:5, Interesting) What does the patch fix?
(Score:5, Interesting) This may well protect against the example exploit, but what happens if you get a floating-point exception in the handler for some other signal? The provided patch does not look like a real fix, unless the deeper bug really does just involve sig==8. Re:What does the patch fix? (Score:4, Interesting) And nobody ever said bandaids were bad, right? Although Windows is Easier to apply patches to... (Score:5, Interesting) FYI suse 9.1 not vulnerable (Score:5, Informative) However, it does not do so on SuSE Linux 9.1 - it creates an unkillable process, but the system continues to run normally. It's funny (Score:3, Interesting) another way to fix the problem... (Score:5, Funny)

/* include lines restored -- the originals were stripped by the page formatting */
#include <signal.h>
#include <sys/time.h>
#include <unistd.h>

static void Handler(int ignore)
{
    char fpubuf[108];
    write(2, "*", 1);
}

int main(int argc, char *argv[])
{
    struct itimerval spec;

    signal(SIGALRM, Handler);
    spec.it_interval.tv_sec = 0;
    spec.it_interval.tv_usec = 100;
    spec.it_value.tv_sec = 0;
    spec.it_value.tv_usec = 100;
    setitimer(ITIMER_REAL, &spec, NULL);

    while (1)
        write(1, ".", 1);

    return 0;
}

by simply commenting out the inline assembly, i fixed crash.c so it can no longer crash Linux! THIS is why I hate Linux (Score:5, Funny) How am I supposed to keep up with this stuff? this *is* a big deal (Score:4, Interesting) 1. There *was no patch*. Some systems were immune, but that was completely by chance. 2. There is a patch *now*, but the article also says people are already using the thing to crash free shell providers on day 0. 3. The patch, at this point, requires a kernel recompile. Not everyone running Linux knows how to do that. Many who do are too lazy. Don't give me some shit about how everyone running Linux is so 1337 that they will be sure they have already patched their system. I know you. You aren't that 1337. 4. Yes, this *is* a big deal. We were caught with our pants down, plain and simple. This *is* worse than any Windows security issue that has come up in a long time. 5.
Please *do* compile the demo code against your system and test it. If your system crashes, please patch. Don't act like many and just ignore this, especially if you are running a server or anything that stays connected for any amount of time. It also might be a good idea to turn off your telnet and ssh daemon (yes, even ssh) until you patch. 6. If you are *not* running Linux or not running on x86, it might also be a good idea to test the demo code against your system. If you are running Windows, some versions of Windows *do* support POSIX to a limited degree. The code *might* compile. Then there is also Cygwin. This is probably a bug specific to Linux x86, but it won't hurt to check. Re:Open Source Community shows its Value (Score:5, Funny) Let's just hope they're not browsing for pr0n. Re: My Experience with the Linux (Score:5, Funny) I think you'll need to clarify that for us slashdot folk. Re:OS bugs are like golf... (Score:5, Insightful) Re:OS bugs are like golf... (Score:3, Interesting) This was made worse by the fact that many people run as admin and IIS used to run as LocalSystem on default installs. However, all software has bugs; this incident is neither proof positive nor proof negative of any argument re: open source vs closed source. Re:OS bugs are like golf... (Score:3, Interesting) Yes, and who says these aren't present on Linux systems? Do you claim that all Linux distros have been as heavily assaulted as Windows, and kept up? I don't think so, and therefore I don't think we can say anything about the security of a Linux + libs + apps system. Re:OS bugs are like golf... (Score:5, Funny) Linux trolls: Windows sucks!!! Slashdot blurb about Linux bug Linux trolls: Windows sucks!!! Re:Fixed quickly. (Score:4, Insightful) The same cannot be said of many proprietary OSes... The fact that a patch is available doesn't mean that it is a non-issue.
In many cases system administrators are too busy or lazy, or do not wish to interrupt services, to update their systems to fix these software vulnerabilities. The proprietary vs. non-proprietary argument is irrelevant if administrators fail to keep up-to-date with security fixes. A good example of this was the SQL Slammer worm, which made its rounds several months after a patch that fixed its attack vector was released. Simply put, the bigger problem is with the wet-ware than the development methodology. Re:Fixed quickly. (Score:3, Interesting) Also, how will I be able to apply the patch, and where is it? Do I have to recompile my kernel? If this were a Windows bug, it would have been thoroughly exploited, made the news, and I would have already applied the patch by clicking "Windows Update". A bigger deal would have been made of it, but it would have only taken about a minute of my time. I do prefer Linux, but we need to be open-minded. Re:Fixed quickly. (Score:4, Informative) Re:Fixed quickly. (Score:5, Informative) Here [theaimsgroup.com] is the LKML discussion thread on the subject. It's an interesting bug, briefly summarised by Matt Mackall as follows: So there's a bit of a massive problem with FPU exception handling, which didn't come to light before. Wheee. Fun. Re:Fixed quickly. (Score:5, Interesting) Re:Similar windows problem (Score:4, Interesting) So, it's been fixed in XP SP1. Months after the flaw was reported, and with a woefully incorrect knowledge base article too. Also, it hasn't been fixed in NT4, and it hasn't officially been fixed in 2000 either, although it seemed to go away after Win2K SP3. Re:A good time to disable compiler access (Score:5, Insightful) Re:A good time to disable compiler access (Score:5, Insightful) I don't think this idea is useful. Re:This is the best they can come up with?
(Score:5, Insightful) Hitting ctrl-alt-delete or the power button requires physical access, which shell users almost never have (I don't even know where most of the computers I use every day are - they could be in Timbuktu for all I care). Re:Not news. (Score:4, Informative) help ulimit [CORRECTION] Re:RHEL3 doesn't crash (Score:5, Informative) My test was on a dual P4 (hyperthreading). Running a single instance of the code only locked a single CPU. I just played with it again, and running 4 instances locked the box. So RHEL3 is vulnerable, and a correct description of the problem is that the exploit locks up 1 CPU in an endless loop that cannot be stopped. For systems with multiple CPUs, you have to do this once for each CPU (twice for each physical CPU if hyperthreading) in order to lock the whole box up.
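The renice/ulimit advice in the comments above can also be applied programmatically. As an illustrative aside (not from the thread): the shell's ulimit is a front-end for the setrlimit() syscall, which Python exposes via the resource module; the same mechanism caps memory (RLIMIT_AS) and CPU seconds (RLIMIT_CPU). The sketch below uses the file-descriptor limit so the demo has no lasting effect:

```python
import resource

# Read the current (soft, hard) file-descriptor limits --
# the same knob `ulimit -n` adjusts in the shell.
soft0, hard0 = resource.getrlimit(resource.RLIMIT_NOFILE)

# An unprivileged process may lower its soft limit freely, and raise it
# again only up to the hard limit.
new_soft = 256 if soft0 == resource.RLIM_INFINITY else min(256, soft0)
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard0))
assert resource.getrlimit(resource.RLIMIT_NOFILE)[0] == new_soft

resource.setrlimit(resource.RLIMIT_NOFILE, (soft0, hard0))  # restore
```

An admin script could apply the same pattern with RLIMIT_AS or RLIMIT_CPU before launching student processes, which is essentially what the commenters are suggesting.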
https://slashdot.org/story/04/06/14/118209/new-linux-kernel-crash-exploit-discovered
CC-MAIN-2017-26
en
refinedweb
//This code developed by Ramy Mahrous
//ramyamahrous@hotmail.com
//Its contents are provided "as is", without warranty.

/// <summary>
/// Represents Student object
/// </summary>
public class Student
{
    int id;
    public int ID
    {
        get { return id; }
        set { id = value; }
    }

    string firstName;
    public string FirstName
    {
        get { return firstName; }
        set { firstName = value; }
    }

    string lastName;
    public string LastName
    {
        get { return lastName; }
        set { lastName = value; }
    }

    public string FullName
    {
        get { return firstName + " " + lastName; }
    }

    public Student(int id, string firstName, string lastName)
    {
        this.id = id;
        this.firstName = firstName;
        this.lastName = lastName;
    }
}

/// <summary>
/// Binds collection of students to ComboBox
/// </summary>
public void Bind()
{
    Student[] students = new Student[6];
    students[0] = new Student(1, "Ramy", "Mahrous");
    students[1] = new Student(2, "FCI", "Helwan");
    students[2] = new Student(3, "Danny", "");
    students[3] = new Student(4, "Serkan", "Sendur");
    students[4] = new Student(5, "Scott", "");
    students[5] = new Student(6, "adatapost", "Y");

    // Setting DataSource populates the list by itself; also calling
    // Items.AddRange is redundant, and modifying Items while a
    // DataSource is set throws an ArgumentException.
    comboBox1.DataSource = students;
    comboBox1.ValueMember = "ID";
    comboBox1.DisplayMember = "FullName";
}
https://www.daniweb.com/programming/software-development/code/217418/how-to-bind-combobox-to-an-array-of-object
This notebook originally appeared as a post on the blog Pythonic Perambulations. The content is BSD licensed. An oft-repeated rule of thumb in any sort of statistical model fitting is "you can't fit a model with more parameters than data points". This idea appears to be as widespread as it is incorrect. On the contrary, if you construct your models carefully, you can fit models with more parameters than datapoints, and this is much more than mere trivia with which you can impress the nerdiest of your friends: as I will show here, this fact can prove to be very useful in real-world scientific applications. A model with more parameters than datapoints is known as an under-determined system (for background, see my posts on the subject). I'll also discuss some practical examples of where such an underdetermined model can be useful, and demonstrate one of these examples: quantitatively accounting for measurement biases in scientific data. While the model complexity myth is not true in general, it is true in the specific case of simple linear models, which perhaps explains why the myth is so pervasive. In this section I want to motivate the underdetermination issue in simple linear models, first from an intuitive view, and then from a more mathematical view.
I'll start by defining some Python functions to create plots for the examples below; you can skip reading this code block for now:

# Code to create figures
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np

plt.style.use('ggplot')

def plot_simple_line():
    rng = np.random.RandomState(42)
    x = 10 * rng.rand(20)
    y = 2 * x + 5 + rng.randn(20)
    p = np.polyfit(x, y, 1)
    xfit = np.linspace(0, 10)
    yfit = np.polyval(p, xfit)
    plt.plot(x, y, 'ok')
    plt.plot(xfit, yfit, color='gray')
    plt.text(9.8, 1, "y = {0:.2f}x + {1:.2f}".format(*p),
             ha='right', size=14);

def plot_underdetermined_fits(p, brange=(-0.5, 1.5), xlim=(-3, 3),
                              plot_conditioned=False):
    rng = np.random.RandomState(42)
    x, y = rng.rand(2, p).round(2)
    xfit = np.linspace(xlim[0], xlim[1])
    for r in rng.rand(20):
        # add a datapoint to make model specified
        b = brange[0] + r * (brange[1] - brange[0])
        xx = np.concatenate([x, [0]])
        yy = np.concatenate([y, [b]])
        theta = np.polyfit(xx, yy, p)
        yfit = np.polyval(theta, xfit)
        plt.plot(xfit, yfit, color='#BBBBBB')
    plt.plot(x, y, 'ok')

    if plot_conditioned:
        X = x[:, None] ** np.arange(p + 1)
        theta = np.linalg.solve(np.dot(X.T, X)
                                + 1E-3 * np.eye(X.shape[1]),
                                np.dot(X.T, y))
        Xfit = xfit[:, None] ** np.arange(p + 1)
        yfit = np.dot(Xfit, theta)
        plt.plot(xfit, yfit, color='black', lw=2)

def plot_underdetermined_line():
    plot_underdetermined_fits(1)

def plot_underdetermined_cubic():
    plot_underdetermined_fits(3, brange=(-1, 2), xlim=(0, 1.2))

def plot_conditioned_line():
    plot_underdetermined_fits(1, plot_conditioned=True)

The archetypical model-fitting problem is that of fitting a line to data: A straight-line fit is one of the simplest of linear models, and is usually specified by two parameters: the slope m and intercept b. For any observed value $x$, the model prediction for $y$ under the model $M$ is given by$$ y_M = m x + b $$ for some particular choice of $m$ and $b$.
Given $N$ observed data points $\{x_i, y_i\}_{i=1}^N$, it is straightforward (see below) to compute optimal values for $m$ and $b$ which fit this data:

plot_simple_line()

The simple line-plus-intercept is a two-parameter model, and it becomes underdetermined when fitting it to fewer than two datapoints. This is easy to understand intuitively: after all, you can draw any number of perfectly-fit lines through a single data point:

plot_underdetermined_line()

The single point simply isn't enough to pin down both a slope and an intercept, and the model has no unique solution. While it's harder to see intuitively, this same notion extends to simple linear models with more terms. For example, let's consider fitting a general cubic curve to data. In this case our model is$$ y_M = \theta_0 + \theta_1 x + \theta_2 x^2 + \theta_3 x^3 $$ Note that this is still a linear model: the "linear" refers to the linearity of model parameters $\theta$ rather than linearity of the dependence on the data $x$. Our cubic model is a four-parameter linear model, and just as a two-parameter model is underdetermined for fewer than two points, a four-parameter model is underdetermined for fewer than four points. For example, here are some of the possible solutions of the cubic model fit to three randomly-chosen points:

plot_underdetermined_cubic()

For any such simple linear model, an underdetermined system will lead to a similar result: an infinite set of best-fit solutions. To make more progress here, let's quickly dive into the mathematics behind these linear models. Going back to the simple straight-line fit, we have our model$$ y_M(x~|~\theta) = \theta_0 + \theta_1 x $$ where we've replaced our slope $m$ and intercept $b$ by a more generalizable parameter vector $\theta = [\theta_0, \theta_1]$. Given some set of data $\{x_n, y_n\}_{n=1}^N$ we'd like to find $\theta$ which gives the best fit.
For reasons I'll not discuss here, this is usually done by minimizing the sum of squared residuals from each data point, often called the $\chi^2$ of the model in reference to its expected theoretical distribution:$$ \chi^2 = \sum_{n=1}^N [y_n - y_M(x_n~|~\theta)]^2 $$ We can make some progress by re-expressing this model in terms of matrices and vectors; we'll define the vector of $y$ values:$$ y = [y_1, y_2, y_3, \cdots y_N] $$ We'll also define the design matrix $X$; this contains all the information about the form of the model:$$ X = \left[ \begin{array}{ll} 1 & x_1 \\ 1 & x_2 \\ \vdots &\vdots \\ 1 & x_N \\ \end{array} \right] $$ With this formalism, the vector of model values can be expressed as a matrix-vector product:$$ y_M = X\theta $$ and the $\chi^2$ can be expressed as a simple linear product as well:$$ \chi^2 = (y - X\theta)^T(y - X\theta) $$ We'd like to minimize the $\chi^2$ with respect to the parameter vector $\theta$, which we can do by the normal means of differentiating with respect to the vector $\theta$ and setting the result to zero (yes, you can take the derivative with respect to a vector!):$$ \frac{d\chi^2}{d\theta} = -2X^T(y - X\theta) = 0 $$ Solving this for $\theta$ gives the Maximum Likelihood Estimate (MLE) for the parameters,$$ \hat{\theta}_{MLE} = [X^T X]^{-1} X^T y $$ Though this matrix formalism may seem a bit over-complicated, the nice part is that it straightforwardly generalizes to a host of more sophisticated linear models. For example, the cubic model considered above requires only a larger design matrix $X$:$$ X = \left[ \begin{array}{llll} 1 & x_1 & x_1^2 & x_1^3\\ 1 & x_2 & x_2^2 & x_2^3\\ \vdots & \vdots & \vdots & \vdots\\ 1 & x_N & x_N^2 & x_N^3\\ \end{array} \right] $$ The added model complexity is completely encapsulated in the design matrix, and the expression to compute $\hat{\theta}_{MLE}$ from $X$ is unchanged! 
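As a quick aside, the closed-form solution above is easy to verify numerically; the following sketch uses synthetic straight-line data of my own (not from the post) and checks the normal-equation estimate against np.polyfit, which minimizes the same $\chi^2$:

```python
import numpy as np

# Synthetic straight-line data (illustrative; my own choices, not the post's)
rng = np.random.RandomState(42)
x_lin = 10 * rng.rand(20)
y_lin = 2 * x_lin + 5 + rng.randn(20)

# Design matrix X = [1, x] and the normal-equation solution
X_lin = np.vstack([np.ones_like(x_lin), x_lin]).T
theta_mle = np.linalg.solve(X_lin.T @ X_lin, X_lin.T @ y_lin)

# np.polyfit minimizes the same chi^2, so the two estimates should agree
b_fit, m_fit = np.polyfit(x_lin, y_lin, 1)[::-1]
assert np.allclose(theta_mle, [b_fit, m_fit])
```

The agreement is exact up to floating-point precision, since both routes solve the same least-squares problem.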
Taking a look at this Maximum Likelihood solution for $\theta$, we see that there is only one place that it might go wrong: the inversion of the matrix $X^T X$. If this matrix is not invertible (i.e. if it is a singular matrix) then the maximum likelihood solution will not be well-defined. The number of rows in $X$ corresponds to the number of data points, and the number of columns in $X$ corresponds to the number of parameters in the model. It turns out that a matrix $C = X^TX$ will always be singular if $X$ has fewer rows than columns, and this is the source of the problem. For underdetermined models, $X^TX$ is a singular matrix, and so the maximum likelihood fit is not well-defined. Let's take a look at this in the case of fitting a line to the single point shown above, $(x=0.37, y=0.95)$. For this value of $x$, here is the design matrix:

X = np.array([[1, 0.37]])

We can use this to compute the normal matrix, which is the standard name for $X^TX$:

C = np.dot(X.T, X)

If we try to invert this, we will get an error telling us the matrix is singular:

np.linalg.inv(C)

---------------------------------------------------------------------------
LinAlgError                               Traceback (most recent call last)
<ipython-input-7-a6d66d9f99af> in <module>()
----> 1 np.linalg.inv(C)

/Users/jakevdp/anaconda/envs/py34/lib/python3.4/site-packages/numpy/linalg/linalg.py in inv(a)
    518     signature = 'D->D' if isComplexType(t) else 'd->d'
    519     extobj = get_linalg_error_extobj(_raise_linalgerror_singular)
--> 520     ainv = _umath_linalg.inv(a, signature=signature, extobj=extobj)
    521     return wrap(ainv.astype(result_t))
    522

/Users/jakevdp/anaconda/envs/py34/lib/python3.4/site-packages/numpy/linalg/linalg.py in _raise_linalgerror_singular(err, flag)
     88
     89 def _raise_linalgerror_singular(err, flag):
---> 90     raise LinAlgError("Singular matrix")
     91
     92 def _raise_linalgerror_nonposdef(err, flag):

LinAlgError: Singular matrix

Evidently, if we want to fix the underdetermined model, we'll need to figure out how to
modify the normal matrix so it is no longer singular. One easy way to make a singular matrix invertible is to condition it: that is, you add to it some multiple of the identity matrix before performing the inversion (in many ways this is equivalent to "fixing" a divide-by-zero error by adding a small value to the denominator). Mathematically, that looks like this:$$ C = X^TX + \sigma I $$ For example, by adding $\sigma = 10^{-3}$ to the diagonal of the normal matrix, we condition the matrix so that it can be inverted:

cond = 1E-3 * np.eye(2)
np.linalg.inv(C + cond)

array([[ 121.18815362, -325.16038316],
       [-325.16038316,  879.69065823]])

Carrying this conditioned inverse through the computation, we get the following intercept and slope for our underdetermined problem:

b, m = np.linalg.solve(C + cond, np.dot(X.T, [0.95]))
print("Conditioned best-fit model:")
print("y = {0:.3f} x + {1:.3f}".format(m, b))

Conditioned best-fit model:
y = 0.309 x + 0.835

plot_conditioned_line()

This conditioning caused the model to settle on a particular one of the infinite possibilities for a perfect fit to the data. Numerically we have fixed our issue, but this arbitrary conditioning is more than a bit suspect: why is this particular result chosen, and what does it actually mean in terms of our model fit? In the next two sections, we will briefly discuss the meaning of this conditioning term from both a frequentist and Bayesian perspective. In a frequentist approach, this type of conditioning is known as regularization. Regularization is motivated by a desire to penalize large values of model parameters. For example, in the underdetermined fit above (with $(x, y) = (0.37, 0.95)$), you could fit the data perfectly with a slope of one billion and an intercept near negative 370 million, but in most real-world applications this would be a silly fit.
To prevent this sort of canceling parameter divergence, in a frequentist setting you can "regularize" the model by adding a penalty term to the $\chi^2$; one popular choice is a penalty term proportional to the sum of squares of the model parameters themselves:$$ \chi^2_{reg} = \chi^2 + \lambda~\theta^T\theta $$ Here $\lambda$ is the degree of regularization, which must be chosen by the person implementing the model. Using the expression for the regularized $\chi^2$, we can minimize with respect to $\theta$ by again taking the derivative and setting it equal to zero:$$ \frac{d\chi^2_{reg}}{d\theta} = -2[X^T(y - X\theta) - \lambda\theta] = 0 $$ This leads to the following regularized maximum likelihood estimate for $\theta$:$$ \hat{\theta}_{MLE} = [X^TX + \lambda I]^{-1} X^T y $$ Comparing this to our conditioning above, we see that the regularization degree $\lambda$ is identical to the conditioning term $\sigma$ that we considered above. That is, regularization of this form is nothing more than a simple conditioning of $X^T X$, with $\lambda = \sigma$. The result of this conditioning is to push the absolute values of the parameters toward zero, and in the process make an ill-defined problem solvable. I'll add that the above form of regularization is known variously as L2 regularization or Ridge Regularization, and is only one of the possible regularization approaches. Another useful form of regularization is L1 regularization, also known as Lasso Regularization, which has the interesting property that it favors sparsity in the model. Regularization illuminates the meaning of matrix conditioning, but it still sometimes seems like a bit of black magic. What does this penalization of model parameters within the $\chi^2$ actually mean? Here, we can make progress in understanding the problem by examining regularization from a Bayesian perspective.
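Before switching perspectives, the regularized estimate above can be sketched numerically. This toy example (data and $\lambda$ values are my own illustrative choices) confirms that increasing $\lambda$ shrinks the fitted parameter vector toward zero:

```python
import numpy as np

# Sketch of the regularized estimate theta = (X^T X + lam I)^{-1} X^T y
rng = np.random.RandomState(0)
x_r = np.linspace(0, 1, 10)
y_r = 3 * x_r - 1 + 0.1 * rng.randn(10)
X_r = np.vstack([np.ones_like(x_r), x_r]).T

def ridge_fit(lam):
    """Regularized (ridge) solution for a given penalty strength."""
    return np.linalg.solve(X_r.T @ X_r + lam * np.eye(2), X_r.T @ y_r)

# The norm of the solution is non-increasing in lambda: the L2 penalty
# pushes the parameters toward zero, exactly as described above.
norms = [np.linalg.norm(ridge_fit(lam)) for lam in [0.0, 0.1, 1.0, 10.0]]
assert all(a >= b for a, b in zip(norms, norms[1:]))
```

The monotone shrinkage holds for any data: in the SVD basis each component of the solution is damped by a factor $s_i^2/(s_i^2 + \lambda)$.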
As I pointed out in my series of posts on Frequentism and Bayesianism, for many simple problems, the frequentist likelihood (proportional to the negative exponent of the $\chi^2$) is equivalent to the Bayesian posterior (albeit with a subtly but fundamentally different interpretation). The Bayesian posterior probability on the model parameters $\theta$ is given by$$ P(\theta~|~D, M) = \frac{P(D~|~\theta, M) P(\theta~|~M)}{P(D~|~M)} $$ where the most important terms are the likelihood $P(D~|~\theta, M)$ and the prior $P(\theta~|~M)$. From the expected correspondence with the frequentist result, we can write:$$ P(D~|~\theta, M) P(\theta~|~M) \propto \exp[- \chi^2] $$ Because the term on the right-hand-side has no $\theta$ dependence beyond the $\chi^2$ itself, we can immediately see that$$ P(D~|~\theta, M) \propto \exp[-\chi^2]\\ P(\theta~|~M) \propto 1 $$ That is, the simple frequentist likelihood is equivalent to the Bayesian posterior for the model with an implicit flat prior on $\theta$. To understand the meaning of regularization, let's repeat this exercise with the regularized $\chi^2$:$$ P(D~|~\theta, M) P(\theta~|~M) \propto \exp[- \chi^2 - \lambda~|\theta|^2] $$ The regularization term in the $\chi^2$ becomes a second term in the product which depends only on $\theta$, thus we can immediately write$$ P(D~|~\theta, M) \propto \exp[-\chi^2]\\ P(\theta~|~M) \propto \exp[- \lambda~|\theta|^2] $$ So we see that ridge regularization is equivalent to applying a Gaussian prior to your model parameters, centered at $\theta=0$ and with a width $\sigma_P = (2\lambda)^{-1/2}$. This insight lifts the cover off the black box of regularization, and reveals that it is simply a roundabout way of adding a Bayesian prior within a frequentist paradigm. The stronger the regularization, the narrower the implicit Gaussian prior is.
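This equivalence is easy to check numerically. In the sketch below (toy data of my own), the closed-form ridge estimate coincides with the MAP point found by directly minimizing the negative log-posterior, i.e. the $\chi^2$ plus the Gaussian-prior penalty:

```python
import numpy as np
from scipy import optimize

# Toy data (illustrative choices of my own, not from the post)
rng = np.random.RandomState(1)
x_b = np.linspace(0, 1, 8)
y_b = 2 * x_b + 1 + 0.1 * rng.randn(8)
X_b = np.vstack([np.ones_like(x_b), x_b]).T
lam = 0.5

# Closed-form regularized (ridge) estimate
theta_ridge = np.linalg.solve(X_b.T @ X_b + lam * np.eye(2), X_b.T @ y_b)

# Negative log-posterior: chi^2 plus the Gaussian-prior penalty lam*|theta|^2
def neg_log_post(theta):
    resid = y_b - X_b @ theta
    return resid @ resid + lam * (theta @ theta)

theta_map = optimize.minimize(neg_log_post, [0.0, 0.0]).x
assert np.allclose(theta_ridge, theta_map, atol=1e-4)
```

Both routes minimize the same quadratic objective, so the agreement is limited only by the optimizer's tolerance.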
Returning to our single-point example above, we can quickly see how this intuition explains the particular model chosen by the regularized fit; it is equivalent to fitting the line while taking into account prior knowledge that both the intercept and slope should be near zero:

plot_conditioned_line()

The benefit of the Bayesian view is that it helps us understand exactly what this conditioning means for our model, and given this understanding we can easily extend it to use more general priors. For example, what if you have reason to believe your slope is near 1, but have no prior information on your intercept? In the Bayesian approach, it is easy to add such information to your model in a rigorous way. But regardless of which approach you use, this central fact remains clear: you can fit models with more parameters than data points, if you restrict your parameter space through the use of frequentist regularization or Bayesian priors. One area where underdetermined models are often used is in the area of Compressed Sensing. Compressed sensing comprises a set of models in which underdetermined linear systems are solved using a sparsity prior, the classic example of which is the reconstruction of a full image from just a handful of its pixels. As a simple linear model this would fail, because there are far more unknown pixels than known pixels. But by carefully training a model on the structure of typical images and applying priors based on sparsity, this seemingly impossible problem becomes tractable. This 2010 Wired article has a good popular-level discussion of the technique and its applications, and includes this image showing how a partially-hidden input image can be iteratively reconstructed from a handful of pixels using a sparsity prior:

from IPython.display import Image
Image('')

Another area where a classically underdetermined model is solved is in the case of Gaussian Process Regression.
Gaussian Processes are an interesting beast, and one way to view them is that rather than fitting, say, a two-parameter line or four-parameter cubic curve, they actually fit an infinite-dimensional model to data! They accomplish this by judicious use of certain priors on the model, along with a so-called "kernel trick" which solves the infinite-dimensional regression implicitly using a finite-dimensional representation constructed based on these priors. In my opinion, the best resource to learn more about Gaussian Process methods is the Gaussian Processes for Machine Learning book, which is available for free online (though it is a bit on the technical side). You might also take a look at the scikit-learn Gaussian Process documentation. If you'd like to experiment with a fast and flexible Gaussian Process implementation in Python, check out the george library. Another place I've seen effective use of the above ideas is in situations where the data collection process has unavoidable imperfections or biases. There are two basic ways forward when working with such noisy and biased data: you can either attempt to clean or excise the biased measurements before fitting, or you can build the imperfections of the measurement process into the model itself and fit for them along with everything else. If you'd like to see a great example of this style of forward-modeling analysis, check out the efforts of David Hogg's group in finding extrasolar planets in the Kepler survey's K2 data; there's a nice Astrobites post which summarizes some of these results. While the group hasn't yet published any results based on truly underdetermined models, it is easy to imagine how this style of comprehensive forward-modeling analysis could be pushed to such an extreme. While it might be fun to dive into the details of models for noisy exoplanet searches, I'll defer that to the experts. Instead, as a more approachable example of an underdetermined model, I'll revisit a toy example in which a classically underdetermined model is used to account for imperfections in the input data.
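Before diving into that example, here is a bare-bones numpy sketch of the Gaussian Process posterior mean mentioned above. This is my own minimal illustration (the RBF kernel width and noise level are arbitrary choices); see the GPML book or scikit-learn for real implementations:

```python
import numpy as np

def rbf_kernel(a, b, width=1.0):
    """Squared-exponential (RBF) kernel between two 1-d point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / width ** 2)

# A handful of noise-free training points (my own toy data)
x_train = np.linspace(0, 5, 6)
y_train = np.sin(x_train)

noise = 1e-8  # tiny jitter: a near-interpolating GP
K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
alpha = np.linalg.solve(K, y_train)

# Posterior mean at new points: k(x*, X) K^{-1} y
x_test = np.linspace(0, 5, 50)
y_pred = rbf_kernel(x_test, x_train) @ alpha

# With negligible noise, the GP mean (nearly) interpolates the data
y_at_train = rbf_kernel(x_train, x_train) @ alpha
assert np.allclose(y_at_train, y_train, atol=1e-5)
```

Despite the "infinite-dimensional" model, the computation reduces to one linear solve of size $N$, which is exactly the kernel trick at work.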
Imagine you have some observed data you would like to model, but you know that your detector is flawed such that some observations (you don't know which) will have a bias that is not reflected in the estimated error: in short, there are outliers in your data. How can you fit a model to this data while accounting for the possibility of such outliers? To make this more concrete, consider the following data, which is drawn from a line with noise, and includes several outliers:

rng = np.random.RandomState(42)
theta = [-10, 2]

x = 10 * rng.rand(10)
dy = 2 + 2 * rng.rand(10)
y = theta[0] + theta[1] * x + dy * rng.randn(10)
y[4] += 15
y[7] -= 10

plt.errorbar(x, y, dy, fmt='ok', ecolor='gray');

If we try to fit a line using the standard $\chi^2$ minimization approach, we will find an obviously biased result:

from scipy import optimize

def chi2(theta, x=x, y=y, dy=dy):
    y_model = theta[0] + theta[1] * x
    return np.sum(0.5 * ((y - y_model) / dy) ** 2)

theta1 = optimize.fmin(chi2, [0, 0], disp=False)

xfit = np.linspace(0, 10)
plt.errorbar(x, y, dy, fmt='ok', ecolor='gray')
plt.plot(xfit, theta1[0] + theta1[1] * xfit, '-k')
plt.title('Standard $\chi^2$ Minimization');

This reflects a well-known deficiency of $\chi^2$ minimization: it is not robust to the presence of outliers. What we would like to do is propose a model which somehow accounts for the possibility that each of these points may be the result of a biased measurement. One possible route is to add $N$ new model parameters: one associated with each point which indicates whether it is an outlier or not. If it is an outlier, we use the standard model likelihood; if not, we use a likelihood with a much larger error. The result for our straight-line fit will be a model with $N + 2$ parameters, where $N$ is the number of data points. An overzealous application of lessons from simple linear models might lead you to believe this model can't be solved. But, if carefully constructed, it can! Let's see how it can be done.
Our linear model is:$$ y_M(x~|~\theta) = \theta_0 + \theta_1 x $$ For a non-outlier (let's call it an "inlier") point at $x$, $y$, with error on $y$ given by $dy$, the likelihood is$$ L_{in, i}(D~|~\theta) = \frac{1}{\sqrt{2\pi dy_i^2}} \exp\frac{-[y_i - y_M(x_i~|~\theta)]^2}{2 dy_i^2} $$ For an "outlier" point, the likelihood is$$ L_{out, i}(D~|~\theta) = \frac{1}{\sqrt{2\pi \sigma_y^2}} \exp\frac{-[y_i - y_M(x_i~|~\theta)]^2}{2 \sigma_y^2} $$ where $\sigma_y$ is the standard deviation of the $y$ data: note that the only difference between the "inlier" and "outlier" likelihood is the width of the Gaussian distribution. Now we'll specify $N$ additional binary model parameters $\{g_i\}_{i=1}^N$ which indicate whether point $i$ is an outlier $(g_i = 1)$ or an inlier $(g_i = 0)$. With this, the overall Likelihood becomes:$$ L(D~|~\theta, g) = \prod_i \left[(1 - g_i)~L_{in, i} + g_i~L_{out, i}\right] $$ We will put a prior on these indicator variables $g$ which encourages sparsity of outliers; this can be accomplished with a simple L1 prior, which penalizes the sum of the $g$ terms:$$ P(g) = \exp\left[-\sum_i g_i\right] $$ where, recall, $g_i \in \{0, 1\}$. Though you could likely solve for a point estimate of this model, I find the Bayesian approach to be much more straightforward and interpretable for a model this complex. To fit this, I'll make use of the excellent emcee package. Because emcee doesn't have categorical variables, we'll instead allow $g_i$ to range continuously between 0 and 1, so that any single point will be some mixture of "outlier" and "inlier". 
We start by defining a function which computes the log-posterior given the data and model parameters, using some computational tricks for the sake of floating-point accuracy:

# theta will be an array of length 2 + N, where N is the number of points
# theta[0] is the intercept, theta[1] is the slope,
# and theta[2 + i] is the weight g_i

def log_prior(theta):
    g = theta[2:]
    # g_i needs to be between 0 and 1
    if (np.any(g < 0) or np.any(g > 1)):
        return -np.inf  # recall log(0) = -inf
    else:
        return -g.sum()

def log_likelihood(theta, x, y, dy):
    sigma_y = np.std(y)
    y_model = theta[0] + theta[1] * x
    g = np.clip(theta[2:], 0, 1)  # g<0 or g>1 leads to NaNs in logarithm
    # log-likelihood for in-lier
    logL_in = -0.5 * (np.log(2 * np.pi * dy ** 2)
                      + ((y - y_model) / dy) ** 2)
    # log-likelihood for outlier
    logL_out = -0.5 * (np.log(2 * np.pi * sigma_y ** 2)
                       + ((y - y_model) / sigma_y) ** 2)
    return np.sum(np.logaddexp(np.log(1 - g) + logL_in,
                               np.log(g) + logL_out))

def log_posterior(theta, x, y, dy):
    return log_prior(theta) + log_likelihood(theta, x, y, dy)

Now we use the emcee package to run this model.
Note that because of the high dimensionality of the model, the run_mcmc command below will take a couple of minutes to complete:

import emcee

ndim = 2 + len(x)  # number of parameters in the model
nwalkers = 50      # number of MCMC walkers
nburn = 10000      # "burn-in" period to let chains stabilize
nsteps = 15000     # number of MCMC steps to take

# set walkers near the maximum likelihood, adding some random scatter
rng = np.random.RandomState(0)
starting_guesses = np.zeros((nwalkers, ndim))
starting_guesses[:, :2] = rng.normal(theta1, 1, (nwalkers, 2))
starting_guesses[:, 2:] = rng.normal(0.5, 0.1, (nwalkers, ndim - 2))

sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=[x, y, dy])
sampler.run_mcmc(starting_guesses, nsteps)

sample = sampler.chain  # shape = (nwalkers, nsteps, ndim)
sample = sampler.chain[:, nburn:, :].reshape(-1, ndim)

-c:21: RuntimeWarning: divide by zero encountered in log
-c:22: RuntimeWarning: divide by zero encountered in log

The runtime warnings here are normal – they just indicate that we've hit log(0) = -inf for some pieces of the calculation. With the sample chain determined, we can plot the marginalized distribution of samples to get an idea of the value and uncertainty of the slope and intercept with this model:

plt.plot(sample[:, 0], sample[:, 1], ',k', alpha=0.1)
plt.xlabel('intercept')
plt.ylabel('slope');

These points describe the marginalized posterior distribution of the slope and intercept for our model given the data.
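The reason those divide-by-zero warnings are harmless can be seen in a two-line experiment: when $g_i$ is exactly 0 or 1, one of the two mixture terms gets a log-weight of -inf, and np.logaddexp simply ignores that dead branch instead of propagating a NaN:

```python
import numpy as np

with np.errstate(divide='ignore'):  # silence the expected log(0) warning
    log_w = np.log(0.0)             # -inf: the weight of a branch with g = 0

# logaddexp(-inf, x) == x: the dead branch contributes nothing
result = np.logaddexp(log_w, -3.5)
print(result)  # -3.5
```

This is why the sampler can safely wander right up to the boundaries g = 0 and g = 1.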
Finally, we can make use of the marginalized values of all $N + 2$ parameters and plot both the best-fit model, along with a model-derived indication of whether each point is an outlier:

theta2 = np.mean(sample[:, :2], 0)
g = np.mean(sample[:, 2:], 0)
outliers = (g > 0.5)

plt.errorbar(x, y, dy, fmt='ok', ecolor='gray')
plt.plot(xfit, theta1[0] + theta1[1] * xfit, color='gray')
plt.plot(xfit, theta2[0] + theta2[1] * xfit, color='black')
plt.plot(x[outliers], y[outliers], 'ro', ms=20, mfc='none', mec='red')
plt.title('Bayesian Fit');

The red circles mark the points that were determined to be outliers by our model, and the black line shows the marginalized best-fit slope and intercept. For comparison, the grey line is the standard maximum likelihood fit. Notice that we have successfully fit an $(N + 2)$-parameter model to $N$ data points, and the best-fit parameters are actually meaningful in a deep way – the $N$ extra parameters give us individual estimates of whether each of the $N$ data points has misreported errors. I think this is a striking example of the practical use of a model which might seem impossible under the model complexity myth!

I hope you will see after reading this post that the model complexity myth, while rooted in a solid understanding of simple linear models, should not be assumed to apply to all possible models. In fact, it is possible to fit models with more parameters than data points: and for the types of noisy, biased, and heterogeneous data we often encounter in scientific research, you can make a lot of progress by taking advantage of such models. Thanks for reading!
http://nbviewer.jupyter.org/url/jakevdp.github.io/downloads/notebooks/ModelComplexityMyth.ipynb
- Author: bahoo
- Posted: October 7, 2010
- Language: Python
- Version: 1.2
- Tags: context_processors, mobile
- Score: 2 (after 2 ratings)

For those interested in making a mobile site geared toward the higher-end devices, and wanting a little leverage over device-specific quirks. These are the big players in the U.S. market, but of course, add your own User-Agents to match your audience's popular browsers.

Usage:

<html class="{{ device.classes }}">

You can also leverage template logic:

{% if device.iphone %}
  <p>You are browsing on
  {% if device.iphone == "iphone4" %} iPhone 4
  {% else %} an iPhone pre-version 4{% endif %}
  </p>
{% endif %}

One commenter warns: it is very dangerous to have a search without an if or similar statement, as in: device['iphone'] = "iphone" + re.search("iphone os (\d)", ua).groups(0)[0]. This made my webapp crash because the HTTP_USER_AGENT was not written exactly how the search was set up.
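The crash the commenter describes comes from calling .groups() on the None that re.search returns when the pattern doesn't match. A minimal sketch of a guarded version (the snippet's actual context processor isn't shown here, so the function name below is illustrative, not from the snippet):

```python
import re

def iphone_classes(user_agent):
    """Return a device dict for the given User-Agent string.

    Guards the regex lookup so a non-matching UA string cannot
    crash the request with an AttributeError on None.
    """
    device = {}
    ua = user_agent.lower()
    match = re.search(r"iphone os (\d)", ua)
    if match:  # only read groups when the search actually matched
        device['iphone'] = "iphone" + match.group(1)
    elif "iphone" in ua:
        device['iphone'] = "iphone"
    return device
```

With this guard, a desktop browser's User-Agent simply yields an empty dict instead of a 500 error.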
https://djangosnippets.org/snippets/2228/
#include <CGAL/Shape_detection/Efficient_RANSAC/Efficient_RANSAC.h>

Shape detection algorithm based on RANSAC [2].

Registers the shape type ShapeType in the detection engine; the type must inherit from Shape_base. For example, for registering a plane as a detectable shape, you should call ransac.add_shape_factory< Shape_detection::Plane<Traits> >();. Note that if your call is within a template, you should add the template keyword just before add_shape_factory: ransac.template add_shape_factory< Shape_detection::Plane<Traits> >();.

Calls clear_octrees() and removes all detected shapes. All internal structures are cleaned, including formerly detected shapes. Thus iterators and ranges retrieved through shapes(), planes() and indices_of_unassigned_points() are invalidated.

Frees memory allocated for the internal search structures but keeps the detected shapes. It invalidates the range retrieved using unassigned_points().

Performs the shape detection. Shape types considered during the detection are those registered using add_shape_factory(). Returns true if shape types have been registered and input data has been set; otherwise, false is returned.

Returns an Iterator_range with a bidirectional iterator with value type boost::shared_ptr<Plane_shape> over only the detected planes, in the order of detection. Depending on the chosen probability for the detection, the planes are ordered with decreasing size.

Sets the input data. The range must stay valid until the detection has been performed and access to the results is no longer required. The data in the input is reordered by the methods detect() and preprocess(). This function first calls clear().

Returns an Iterator_range with a bidirectional iterator with value type boost::shared_ptr<Shape> over the detected shapes, in the order of detection. Depending on the chosen probability for the detection, the shapes are ordered with decreasing size.
https://doc.cgal.org/latest/Shape_detection/classCGAL_1_1Shape__detection_1_1Efficient__RANSAC.html
If your list contains items which may change after the initial list has been loaded, it may be a good idea to allow users to refresh the list. That is easy with the SwipeRefreshBehavior. Simply add an instance of this behavior to your list view and you will get a nice indicator that is shown when the user swipes the list from its top.

If you have read the Getting Started page, you already have a project with a RadListView which is populated with items of type City. In the Behaviors Overview we introduced the behaviors, and now we will go into more detail about the SwipeRefreshBehavior.

Here's how to add the SwipeRefreshBehavior to your list view instance:

Java:

SwipeRefreshBehavior swipeRefreshBehavior = new SwipeRefreshBehavior();
listView.addBehavior(swipeRefreshBehavior);

C#:

SwipeRefreshBehavior swipeRefreshBehavior = new SwipeRefreshBehavior();
listView.AddBehavior(swipeRefreshBehavior);

This will show a loading indicator when the user swipes from the top of the list, but in order to actually refresh the list you will need to add a SwipeRefreshListener. The SwipeRefreshListener is used to get a notification that a refresh is requested. Here's one simple implementation:

Java:

SwipeRefreshBehavior.SwipeRefreshListener swipeRefreshListener = new SwipeRefreshBehavior.SwipeRefreshListener() {
    @Override
    public void onRefreshRequested() {
        cityAdapter.refreshList();
        cityAdapter.notifyRefreshFinished();
    }
};

C#:

public class SwipeListener : Java.Lang.Object, SwipeRefreshBehavior.ISwipeRefreshListener {
    private CityAdapter cityAdapter;

    public SwipeListener(CityAdapter adapter) {
        cityAdapter = adapter;
    }

    public void OnRefreshRequested() {
        cityAdapter.RefreshList();
        cityAdapter.NotifyRefreshFinished();
    }
}

You will also need to create the refreshList() method in your adapter in order to actually refresh the list.
That method's implementation will depend on the way you load your data, but for this example we can leave its body empty, with the presumption that our initial list will not change over time. Pay attention to the call of notifyRefreshFinished(), which is one of the options to notify the behavior that the refresh operation is complete. The other option is to call SwipeRefreshBehavior's endRefresh() method, and the effect will be the same: the loading indicator will hide.

Now we can add the listener to our behavior:

Java:

swipeRefreshBehavior.addListener(swipeRefreshListener);

C#:

SwipeListener swipeRefreshListener = new SwipeListener(cityAdapter);
swipeRefreshBehavior.AddListener(swipeRefreshListener);

The indicator that this behavior uses is actually the SwipeRefreshLayout from the support library. You can get the instance of this layout with the swipeRefresh() method in case you need to apply any customizations to this layout, for example, to change the indicator's color.
https://docs.telerik.com/devtools/android/controls/listview/behaviors/listview-behaviors-swiperefresh
HTML encoding/escaping with StringTemplate and Spring MVC:

@Bean
public ViewResolver viewResolver() {
    InternalResourceViewResolver viewResolver = new InternalResourceViewResolver();
    viewResolver.setPrefix("/WEB-INF/templates/");
    viewResolver.setViewClass(StringTemplateView.class);
    viewResolver.setSuffix(".st");
    return viewResolver;
}

And after some guidance from Jim we changed StringTemplateView to look like this:

public class StringTemplateView extends InternalResourceView {

    @Override
    protected void renderMergedOutputModel(Map<String, Object> model, HttpServletRequest request,
                                           HttpServletResponse response) throws Exception {
        String templateRootDir = format("%s/WEB-INF/templates", getServletContext().getRealPath("/"));
        StringTemplateGroup group = new StringTemplateGroup("view", templateRootDir);
        StringTemplate template = group.getInstanceOf(getBeanName());
        AttributeRenderer htmlEncodedRenderer = new HtmlEncodedRenderer();
        template.registerRenderer(String.class, htmlEncodedRenderer);
        ...
    }

    private class HtmlEncodedRenderer implements AttributeRenderer {
        @Override
        public String toString(Object o) {
            return HtmlUtils.htmlEscape(o.toString());
        }

        @Override
        public String toString(Object o, String formatName) {
            return HtmlUtils.htmlEscape(o.toString());
        }
    }
}
https://markhneedham.com/blog/2011/04/09/html-encodingescaping-with-stringtemplate-and-spring-mvc/
In this article, we will learn about the solution to the problem statement given below.

Problem statement: We are given a number n; we need to print all primes smaller than or equal to n. Constraint: n is a small number.

Now let's observe the solution in the implementation below:

def SieveOfEratosthenes(n):
    # array of type boolean with True values in it
    prime = [True for i in range(n + 1)]
    p = 2
    while (p * p <= n):
        # if it remains unchanged, it is prime
        if (prime[p] == True):
            # updating all the multiples
            for i in range(p * 2, n + 1, p):
                prime[i] = False
        p += 1
    prime[0] = False
    prime[1] = False
    # print
    for p in range(n + 1):
        if prime[p]:
            print(p, end=" ")

# main
if __name__ == '__main__':
    n = 33
    print("The prime numbers smaller than or equal to", n, "is")
    SieveOfEratosthenes(n)

Output:

The prime numbers smaller than or equal to 33 is
2 3 5 7 11 13 17 19 23 29 31

All the variables are declared in the local scope and their references are seen in the figure above. In this article, we have learned how we can make a Python program for the Sieve of Eratosthenes.
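As a side note, the inner loop above can start at p * p instead of p * 2, since any smaller multiple of p has a prime factor less than p and was already crossed out in an earlier pass. A variant that applies this optimization and returns the primes as a list instead of printing them (the function name is my own, not from the article):

```python
def sieve_primes(n):
    """Return all primes <= n using the Sieve of Eratosthenes.

    Starts crossing out at p*p: any smaller multiple of p has a
    prime factor below p and was already marked earlier.
    """
    prime = [True] * (n + 1)
    prime[0:2] = [False, False]  # 0 and 1 are not prime
    p = 2
    while p * p <= n:
        if prime[p]:
            for i in range(p * p, n + 1, p):
                prime[i] = False
        p += 1
    return [p for p in range(n + 1) if prime[p]]
```

For n = 33 this returns the same primes the article's program prints.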
https://www.tutorialspoint.com/python-program-for-sieve-of-eratosthenes
Recently we discussed the Camera Fragment library, which covers all your camera needs, but today we are going to look at Android Fragment Rigger, a library to manage fragments at the least cost of use.

Features of Android Fragment Rigger

- Powerful API
- Enough English notes
- Strictest exceptions
- Resolves usual exceptions and bugs in fragments
- Never loses any fragment transaction commit
- Extends the Android native fragment methods, adding some usual methods such as onBackPressed()
- Prints a tree for the fragment stack
- Fragment lazy load
- Fragment transition animations
- Fragment shared elements transition animations

1. Support

- FragmentRigger only supports android.support.v4.app.Fragment and android.support.v4.app.FragmentActivity. This library is not supported if your Fragment/Activity does not extend those classes.
- FragmentRigger supports SDK 12+.
- FragmentRigger supports the Java language for now. Kotlin will be supported in the future.

2. Installation

This library is powered by AspectJ; you must configure the AspectJ library if you want to use this library.

Add to build.gradle in the root project:

buildscript {
    dependencies {
        ...
        classpath 'com.hujiang.aspectjx:gradle-android-plugin-aspectjx:1.0.10'
    }
}

allprojects {
    repositories {
        ...
        maven { url '' }
    }
}

Add to build.gradle in the application module:

apply plugin: 'android-aspectjx'
android {
    ...
}

Add to build.gradle in the library module:

compile 'com.justkiddingbaby:fragment-rigger:1.0.0'

3. How to use?

FragmentRigger does not need to extend any class; all operations depend on the proxy class Rigger to manage fragments.

Add Fragment/Activity support

Add the @Puppet annotation on your Fragment/Activity; your fragment must be a child class of android.support.v4.app.Fragment and your activity must be a child class of android.support.v4.app.FragmentActivity.
@Puppet
public class ExampleFragment extends Fragment

@Puppet(containerViewId = R.id.atyContent)
public class ExampleActivity extends AppCompatActivity

You can use this library on any class annotated with @Puppet.

@Puppet

If you want to use this library, @Puppet is a necessary condition. This annotation has some params you need to know.

containerViewId

Optional identifier of the container this fragment is to be placed in. If 0, it will not be placed in a container; the default value is 0. This param is used in the method Rigger#startFragment.

bondContainerView

bondContainerView is a boolean.

- If the value is true: the puppet will be closed as the top fragment in the puppet's stack is closing; the top fragment in the stack will not perform a transition animation.
- If the value is false: the puppet will do nothing as the top fragment in the puppet's stack is closing; the top fragment in the stack will perform a transition animation and close.

This param is used when the host Activity#onBackPressed is called.

4. Fragment usage

Android's native fragments provide a series of methods to manage fragments, such as add/show/replace/remove, and provide FragmentManager and FragmentTransaction to let us use fragments. But we often encounter all kinds of problems when using FragmentTransaction. This library can make fragments easier to use.

replace method (ReplaceFragment.java)

The replace method is actually the add + show method; you can make sure one container contains only one Fragment when using this method. This might be the easiest method for using fragments, and it is probably also one of the easiest ways to get things wrong.

Rigger.getRigger(this).replaceFragment(fragment, R.id.fr_content);

Rigger.getRigger(params): the param is the Activity/Fragment class marked by @Puppet.
replaceFragment(@NonNull Fragment fragment, @IdRes int containerViewId) has two parameters: the first is the fragment to be placed, the second is the container view id for the fragment to be placed in. When you use this method, the fragment currently placed in the container view is removed, and the new fragment gets add and show transactions.

show method (ShowFragment.java)

The show method has multiple variants; you can add multiple fragments or show a fragment by tag with this library.

Type one: add fragments first and show by fragment object

Fragment fragments[] = new Fragment[4];
Rigger.getRigger(this).addFragment(containerViewId, fragments);
Rigger.getRigger(this).show(fragments[0]);

Type two: add fragments first and show by fragment tag

Fragment fragments[] = new Fragment[4];
Rigger.getRigger(this).addFragment(containerViewId, fragments);
String tag = Rigger.getRigger(fragments[0]).getFragmentTag();
Rigger.getRigger(this).show(tag);

Type three: add a single fragment and show

Rigger.getRigger(this).addFragment(fragment, containerViewId);

The fragments placed in the container view are hidden but not removed; you can show a fragment by fragment object or by fragment tag.

Remove from stack

This library provides support for removing a Fragment from the stack. The default operation happens when the onBackPressed method is called; besides that, you can use the close() method to remove a fragment from the stack. The default operation removes the top fragment in the stack and shows the next top fragment.

Rigger.getRigger(this).close();

A fragment is a part of the view for its host, so we need to choose whether the host should be closed when the stack contains only one fragment; this is where the second parameter bondContainerView of the @Puppet annotation is used.

- bondContainerView = true: the host that contains only one fragment in its stack will finish or be removed from its own stack as onBackPressed() is called, and the last fragment in the stack does not perform the transition animation.
- bondContainerView = false: the host whose stack is empty will finish or be removed from its own stack, and all fragments in the stack will perform the transition animation.

5. Download this project
http://www.tellmehow.co/add-android-fragment-rigger/
I have a problem using ReSharper's solution-wide analysis and code completion features when working on .ascx controls. My control is located in a separate project in the same solution. All view files (aspx and ascx) are xcopied to the target directory as a post-build task. The project is of the class library type, but the problem exists if it is a Web project, too. In my control's .ascx file I'm specifying what the control inherits from, and trying to use a view model, like this:

<%@ Assembly Name="WebEngine.CommonViews" %>
<%@ Assembly Name="WebEngine.Infrastructure" %>
<%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<WebEngine.Infrastructure.SettingsViewModel<System.Collections.Generic.IEnumerable<WebEngine.Infrastructure.News>, WebEngine.Infrastructure.NewsListSettings>>" %>

Assembly names, namespaces, and type names are correct; the project references the needed assemblies, and everything compiles and runs OK. ReSharper marks the Control tag red with the error "Cannot resolve symbol", and all code references in the .ascx file are red, too, so I have no code completion at all.

I am using JetBrains ReSharper 5.1 C# Edition, Build 5.1.1753.4 on 2010-10-15T16:51:30.

Am I doing something wrong? Is it a known issue? Is there any workaround for the problem? Thanks in advance.

Hello, Could you please attach a small sample solution demonstrating this behavior? Thank you!

Andrey Serebryansky
Senior Support Engineer
JetBrains, Inc
"Develop with pleasure!"

Here is a simple MVC application with controllers and models in the main project and views moved to a separate one. The view marked as an error is in ExternalViews - Views\Shared\NewsList.ascx. I've noticed that ReSharper already understands references in my .ascx, but it has a problem with generic ones.
I can specify the control like this:

<%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %>

and there's no error, but I can't do it like this:

<%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<System.Collections.Generic.IEnumerable<string>>" %>

When this view is in the main project, there's no error. I believe that post-build copying of view files is not a standard way of doing things, but it works and is helpful in my scenario, so it would be great if ReSharper supported it.

Attachment(s): ResharperTest.zip

The same happens when I replace post-build copying with embedding the view files as resources. ReSharper can't understand referenced generic types in external .aspx/.ascx files and marks them as errors, even though it already knows all the referenced assemblies.

Hello, This seems to be a bug, so I've created a bug-report in our tracker, and you can vote for it. Thank you!

Andrey Serebryansky
Senior Support Engineer
JetBrains, Inc
"Develop with pleasure!"
https://resharper-support.jetbrains.com/hc/en-us/community/posts/206679905-Solution-wide-analysis-gets-lost-with-ascx-in-different-project
Intel Parallel Studio for improving OpenCV

I would like to contribute to OpenCV by making a parallel version (using OpenMP) of SURF. Yes, yes, I know, SURF in OpenCV is already "parallel" and it uses Intel TBB, but come on, it scales horribly; I'm sure some improvement can be made ;)

I want to use Intel Parallel Studio tools (e.g. VTune Amplifier, Intel Advisor and Intel Inspector) to make this process not a nightmare and make the parallelization easier. However, for some of them (Intel Advisor for sure) we need to include some headers in the source code (e.g. #include <advisor-annotate.h>) and of course edit the makefile. For example, this is the compile command for one of the Intel Advisor samples used in the tutorial:

g++ -m64 nqueens_serial.cpp -o 1_nqueens_serial -O2 -g -I /opt/intel/advisor_2017.1.1.486553/include/ -ldl

Now there is my question (and I'm afraid that the answer will be painful): OpenCV uses CMake, and I have never developed anything using it (only used it for building). I have no idea how I could integrate the Intel includes and flags necessary for this kind of profiling into the CMake files, or whether this is even possible. Can someone help me or give me some guidelines?

... a wasp's nest, just like you say. Without decent knowledge of the inner CMake pipeline this will be hell ... So happy to hear that OpenCV is so easy to contribute to (lol). I think that "start googling about learning CMake" is the best suggestion that you can give me @StevenPuttemans, right? Or do you have anything better? :)

You are digging right into one of the most complex tasks in the OpenCV code, I guess... There is a reason why I try to leave the CMake as is. But I guess learning CMake basics is indeed a good start.
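For what it's worth, the change the question describes usually comes down to a few lines in the relevant module's CMakeLists.txt. A rough, hedged sketch in CMake (the target name is illustrative, and the include path and flags are lifted from the g++ command above, not from OpenCV's actual build configuration):

```cmake
# Illustrative only: wire the Intel Advisor headers and profiling-friendly
# flags into one module's build. Adjust the target name and path locally.
set(ADVISOR_ROOT "/opt/intel/advisor_2017.1.1.486553")  # example install path

target_include_directories(opencv_example_module PRIVATE "${ADVISOR_ROOT}/include")
target_compile_options(opencv_example_module PRIVATE -O2 -g)  # keep symbols for VTune/Advisor
target_link_libraries(opencv_example_module PRIVATE ${CMAKE_DL_LIBS})  # the -ldl from the sample
```

These annotations are only needed while profiling, so they are usually kept out of any upstream patch.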
https://answers.opencv.org/question/124716/intel-parallel-studio-for-improving-opencv/
This article came up from the need to have step-by-step guidance for Azure beginners on IoT integration, and from the guidance of Ranga Vadlamudi.

Go to the Azure Portal, click Create New Resource and type IoT Hub. Select IoT Hub and click Create. Select Create IoT hub.

Now we will create a Time Series Insights service. Go to the Azure Portal, click on Create New Resource and type Time Series Insights. Then select Time Series Insights and click Create.

Now we will provide the environment parameters.

Note: In case we are using an Azure Pass, please review the cost: $150 per month!

Then check Pin to dashboard and click Create.

It is important to have a consumer group for TSI (Time Series Insights). Go to the IoT Hub, click Endpoints, select Events. Now we will create a New Consumer Group and then click Save.

Now go back to Time Series Insights, click on Event Sources and click on Add. We will provide the parameters to create the Event Source and click Create.

Now go to Overview and click Go to environment.

Note: We will add data access policies later. We will not be able to access the data in the environment with no data access policies defined.

Now go back to the Azure Portal; we will create a Stream Analytics job. Go to Create New Resource, type Stream Analytics, and select Stream Analytics job. Provide the parameters.

Note: if the device is already connected to Wi-Fi we could try Cloud; otherwise try using Edge and connect the device to Wi-Fi later on.

Now we will create a Cosmos DB account. Select Create New Resource, type Cosmos DB, and select Azure Cosmos DB. Click Create, provide the parameters, and click Create.

It is important to highlight that we should try to deploy all the resources in the same location, to avoid future issues with service availability. Now we will create a Logic App.
Go to Create New Resource, type Logic App, and select Logic App. Click Create and provide the parameters.

Now go back to the Azure Stream Analytics job; from the blade, select Input. The input for this will be the IoT Hub we created in the resource group. Select Add stream input, then select IoT Hub and provide the parameters.

Now we will configure the Output. This output will be the Cosmos DB we created in the resource group. Go to the Stream Job blade and select Output. Now click Add and select Cosmos DB.

Please note: if the chosen resource and the Stream Analytics job are located in different regions, we will be billed for moving data between regions.

Now we will define the Stream Analytics query. Go to the Stream Job blade, click on Overview, then click on Edit query. Copy and paste the following query:

SELECT
    [deviceId] AS [deviceId],
    avg(temperature) AS avgtemp
INTO
    CosmosDB
FROM
    [mvp-iothub]
GROUP BY
    deviceId,
    TumblingWindow(second, 15);

Now click Save.

Now we will create an Event Hub, which will receive data from the Azure Function. Tip: take a note of the keys and the name. Go to Create New Resource and type Event Hubs. Select Event Hubs and click Create. Provide the parameters and click Create. Once provisioned, we will create a new event hub. Also, create a SAS policy for this event hub.

Now create an Azure Function.
Go to Create New Resource, type Functions, and select Function App. Provide the parameters.

Go to the Azure Portal; we will select the function and click on Add, then we will select Azure Cosmos DB Trigger. Now click on New and confirm this new account connection is related to our previously created Cosmos DB. Replace with the following code:

#r "D:\home\site\wwwroot\CosmosTriggerCSharp1\bin\Microsoft.Azure.Documents.Client.dll"
#r "D:\home\site\wwwroot\CosmosTriggerCSharp1\bin\Microsoft.Azure.EventHubs.dll"

using System.Collections.Generic;
using System.Configuration;
using System.Text;
using Microsoft.Azure.Documents;
using Microsoft.Azure.EventHubs;

public static void Run(IReadOnlyList<Document> documents, TraceWriter log)
{
    if (documents != null && documents.Count > 0)
    {
        string connectionString = ConfigurationManager.ConnectionStrings["EventHubConnection"].ConnectionString;
        log.Verbose(connectionString);
        var connectionStringBuilder = new EventHubsConnectionStringBuilder(connectionString)
        {
            EntityPath = "eventhub"
        };
        var client = Microsoft.Azure.EventHubs.EventHubClient.CreateFromConnectionString(connectionStringBuilder.ToString());
        foreach (var doc in documents)
        {
            string json = string.Format("{{\"iotid\":\"{0}\",\"temp\":{1}}}",
                doc.GetPropertyValue<string>("iotid"),
                doc.GetPropertyValue<string>("temp"));
            EventData data = new Microsoft.Azure.EventHubs.EventData(Encoding.UTF8.GetBytes(json));
            client.SendAsync(data);
        }
    }
}

Now click on the function, then select Platform features, then click on the Kudu Console. We will be redirected to another tab in our browser to manage our Azure Function. Click on Debug console, CMD. Navigate to the function directory by double-clicking on wwwroot, and then create a bin directory using the "+" sign. Go to the bin directory by double-clicking on it and then add the following DLLs. Get these DLLs from NuGet; repeat this for each of the following.

Note: We can use 7-Zip and extract the .nupkg to quickly get the DLLs.
Once we have downloaded these packages, go back to App Settings. On another tab, open the Event Hub and copy its connection string (the primary key connection string).

Now go back to our Azure Function, select Application settings, and click Add connection string. Provide a connection name, paste the connection string, and select Custom as the connection type. Then go up and click Save.

Now go back to our Logic App. Select the "When events are available in Event Hub" trigger, provide a name, and select the Event Hub. Click Create, then provide the interval and frequency at which to check for incoming events.

Now click New step, then Add an action, and look for "Send an email". We will need to authorize our account (Office 365 in this case), then set up the details of the email.
https://social.technet.microsoft.com/wiki/contents/articles/51448.azure-iot-solution-integration-step-by-step-guide.aspx
import "go.aporeto.io/trireme-lib/controller/internal/enforcer/applicationproxy"

AppProxy maintains state for proxied connections from listeners to backends.

    func NewAppProxy(
        tp tokenaccessor.TokenAccessor,
        c collector.EventCollector,
        puFromID cache.DataStore,
        certificate *tls.Certificate,
        s secrets.Secrets,
        t tcommon.ServiceTokenIssuer,
    ) (*AppProxy, error)

NewAppProxy creates a new instance of the application proxy.

Enforce implements the enforcer.Enforcer interface. It creates the necessary proxies for the particular PU. Enforce can be called multiple times, once for every policy update.

    func (p *AppProxy) GetFilterQueue() *fqconfig.FilterQueue

GetFilterQueue is a stub for the TCP proxy.

Run starts all the network-side proxies. Application-side proxies have to start during Enforce in order to support multiple Linux processes.

Unenforce implements the enforcer.Enforcer interface. It shuts down the application side of the proxy.

UpdateSecrets updates the secrets of running enforcers managed by Trireme. Remote enforcers will get the secret updates with the next policy push.

    type ServerInterface interface {
        RunNetworkServer(ctx context.Context, l net.Listener, encrypted bool) error
        UpdateSecrets(cert *tls.Certificate, ca *x509.CertPool, secrets secrets.Secrets, certPEM, keyPEM string)
        ShutDown() error
    }

ServerInterface describes the methods required by an application processor.

Package applicationproxy imports 21 packages and is imported by 2 packages. Updated 2019-09-15.
https://godoc.org/go.aporeto.io/trireme-lib/controller/internal/enforcer/applicationproxy
Noncompliant Code Example

    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    char *foo;

    void int_handler(int signum) {
      free(foo);   /* free() is not async-signal-safe */
      _Exit(0);
    }

    int main(void) {
      foo = malloc(15);
      signal(SIGINT, int_handler);
      strcpy(foo, "Hello World.");
      puts(foo);
      free(foo);   /* if SIGINT arrives after this, foo is freed twice */
      return 0;
    }

Compliant Solution

Signal handlers should be as minimal as possible, only unconditionally setting a flag where appropriate, and returning. You may also call the _Exit() function to immediately terminate program execution.

    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    char *foo;

    void int_handler(int signum) {
      _Exit(0);   /* _Exit() is async-signal-safe */
    }

    int main(void) {
      foo = malloc(15);
      signal(SIGINT, int_handler);
      strcpy(foo, "Hello World.");
      puts(foo);
      free(foo);
      return 0;
    }

Risk Assessment

Depending on the code, this could lead to any number of attacks, many of which could give root access. For an overview of some software vulnerabilities, see Zalewski's paper on understanding, exploiting, and preventing signal-handling-related vulnerabilities [[Zalewski 01]]. VU#834865 describes a vulnerability resulting from a violation of this rule.

References

[[ISO/IEC 03]] "Signals and Interrupts"
[[Open Group 04]] longjmp
[OpenBSD] signal() Man Page
[[Zalewski 01]]
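The "only set a flag and return" advice generalizes beyond C. As a loose cross-language analogy only (Java has no async-signal-safety rules; this is not C signal handling), the sketch below keeps all real work out of the "handler", which merely flips a volatile flag that the main flow checks:

```java
public class FlagHandler {
    static volatile boolean stopRequested = false; // like a volatile sig_atomic_t flag

    public static void main(String[] args) {
        // The "handler" thread only sets the flag and returns: no cleanup,
        // no I/O, no allocation happens inside it.
        Thread handler = new Thread(() -> stopRequested = true);
        handler.start();
        try {
            handler.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        // All real work happens outside the handler, where it is safe.
        if (stopRequested) {
            System.out.println("shutting down cleanly");
        }
    }
}
```

The structure is the point: the asynchronous event only records that it happened, and the deterministic code path performs the actual shutdown.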
https://wiki.sei.cmu.edu/confluence/pages/viewpage.action?pageId=88020172
Provided by: libbobcat-dev_2.20.01-1_amd64

NAME
FBB::CmdFinder - Determine (member) function associated with a command

SYNOPSIS
#include <bobcat/cmdfinder>
Linking option: -lbobcat

DESCRIPTION
Objects of the class CmdFinder determine which (member) function to call given a command. Although associations between commands and (member) functions are often defined in a switch, a switch is not the preferred way to define these associations, because the maintainability and clarity of switches suffer for even moderately large command sets. Moreover, the switch is hardly ever self-supporting, since usually some command-processing is required to determine command/case-value associations.

The alternative (and preferred) approach, which is also taken by CmdFinder, is to define an array of pointers to (member) functions, and to define the associations between commands and member functions as a mapping of commands to array indices. Plain associations between (textual) commands and functions to be called can also easily be defined using a std::map or another hash-type data structure. However, the syntactical requirements for such a std::map structure are non-trivial, and besides: user-entered commands often require some preprocessing before a command can be used as an index into a std::map.

The class CmdFinder is an attempt to offer a versatile implementation of associations between commands and (member) functions. In particular, the class offers the following features:

o Associations between textual commands and (member) functions are defined in a simple array of pairs: the first element defining a command, the second element containing the address of the function associated with the command. The function addresses may either be addresses of free or static member functions, or they may be defined as member function addresses.
o Commands may be used `as-is', or the first word in a std::string may be used as the command;

o Commands may be specified case sensitively or case insensitively;

o Commands may have to be specified in full, or unique abbreviations of the commands may be accepted;

o Several types are defined by the class CmdFinder, further simplifying the derivation of classes from CmdFinder.

The class CmdFinder itself is defined as a template class. This template class should be used as a base class of a user-defined derived class defining the array of command-function associations. The class CmdFinder itself is a derived class of the class CmdFinderBase, defining some template-independent functionality that is used by CmdFinder. The enumeration and member functions sections below also contain the members that are available to classes derived from CmdFinder, but which are actually defined in the class CmdFinderBase.

NAMESPACE
FBB — All constructors, members, operators and manipulators mentioned in this man-page are defined in the namespace FBB.

INHERITS FROM
FBB::CmdFinderBase

ENUMERATION
The enumeration Mode is defined in the class CmdFinderBase. It contains the following values, which may be combined by the bit_or operator to specify the CmdFinder object's required mode of operation:

o USE_FIRST: This value can be specified when the first word (any white-space separated series of characters) of a provided textual command should be used as the command to find. Both the command that is used and any trailing information that may be present can be obtained from the CmdFinder object. By default, the complete contents of a provided command is used.

o UNIQUE: This value can be specified when any unique abbreviation of a command may be accepted. Assuming that the commands help and version are defined, then the following (non-exhaustive) series are all accepted as specifications of the help command if UNIQUE is specified: h, he, el, p.
By default, the command must exactly match a command-key as found in the array of command-function associations.

o INSENSITIVE: When this value is specified, commands may be specified disregarding letter-casing. E.g., when INSENSITIVE is specified, both Help and HELP are recognized as help. By default, letter casing is obeyed. So, by default a full, literal match between the provided command and the predefined command-keys is required.

TEMPLATE TYPE PARAMETER
The template class CmdFinder has one template type parameter, which is the prototype of the functions defined in the array of command-function associations. This type becomes available as the typename FunctionPtr (defined by the class CmdFinder in the class that is derived from CmdFinder).

PROTECTED DEFINED TYPES
The following (protected) types are defined by the template class CmdFinder:

o FunctionPtr: This type represents a pointer to the functions whose addresses are stored in the array of command-function associations.

o Entry: This type represents the type std::pair<char const *, FunctionPtr>. Its first field is the name of a command, its second field is the function address associated with the command name.

CONSTRUCTORS
o CmdFinder<FunctionPtr>(Entry const *begin, Entry const *end, size_t mode = 0): This constructor is defined in the protected section of the CmdFinder class. Its parameters begin and end define the half-open range of Entry objects defining the associations between commands and functions. The parameter begin should be initialized to the first element of an array of Entry objects; the parameter end must point just beyond the last element of the array. The parameter mode may be specified using any combination of values of the Mode enumeration, using the bit_or operator to combine multiple values. When a non-supported value is specified for mode, an FBB::Errno exception is thrown.

o Note: There is no default constructor. Copy and move constructors are available.
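One plausible reading of the UNIQUE and INSENSITIVE modes described above can be sketched in a few lines. This Java sketch uses prefix matching and illustrative names only; it is not the bobcat API, and the real CmdFinder may match abbreviations differently:

```java
import java.util.*;

public class PrefixDispatch {
    // A hypothetical command set.
    static final List<String> COMMANDS = List.of("help", "history", "version", "quit");

    // All commands the input could abbreviate; lower-casing mimics INSENSITIVE.
    static List<String> matches(String input) {
        String needle = input.toLowerCase(Locale.ROOT);
        return COMMANDS.stream()
                .filter(c -> c.startsWith(needle)) // prefix match mimics abbreviation
                .toList();
    }

    // UNIQUE: the abbreviation resolves only if exactly one command matched.
    static Optional<String> resolve(String input) {
        List<String> m = matches(input);
        return m.size() == 1 ? Optional.of(m.get(0)) : Optional.empty();
    }

    public static void main(String[] args) {
        System.out.println(resolve("he"));  // unique prefix of "help" only
        System.out.println(resolve("h"));   // ambiguous: matches help and history
        System.out.println(resolve("VER")); // case-insensitive, resolves to "version"
    }
}
```

In a real dispatcher the resolved name would index into the command-function array, and the ambiguous case would route to the fallback entry, much like CmdFinder's last array element.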
OVERLOADED OPERATORS
The copy and move assignment operators are available.

PUBLIC MEMBER FUNCTION
o setMode(size_t mode): This member function (defined in the class CmdFinderBase) may be called to redefine the mode of the CmdFinder object. The mode parameter should be initialized subject to the same restrictions as mentioned with the CmdFinder's constructor.

PROTECTED MEMBER FUNCTIONS
o std::string const &beyond() const: This member function returns the text that may have been entered beyond the command (if Mode value USE_FIRST was specified). It is empty if no text beyond the command was encountered. It is initially empty, and will be redefined at each call of findCmd() (see below).

o std::string const &cmd() const: This member returns the original (untransformed) command as encountered by the CmdFinder object. It is initially empty, and will be redefined at each call of findCmd() (see below).

o size_t count() const: This member function returns the number of commands matching the command that is passed to the function findCmd() (see below). Its return value is 0 when findCmd() hasn't been called yet, and is updated at each new call of findCmd().

o FunctionPtr findCmd(std::string const &cmd): Regarding the CmdFinder object's mode setting, this function returns the address of the function to call given the provided command. By default, if no match was found, the address of the function stored in the last element of the array of command-function associations is returned (i.e., element end[-1]).

PROTECTED DATA MEMBERS
The class CmdFinder has access to some protected data members of the class CmdFinderBase, which should not be used or modified by classes derived from CmdFinder.

EXAMPLE
    #include <iostream>
    #include <string>
    //#include <bobcat/cmdfinder>
    #include "../cmdfinder"

    using namespace std;
    using namespace FBB;

    // Define a class `Command' in which the array s_action defines the
    // command-function associations. Command is derived from CmdFinder,
    // specifying the prototype of the member functions to be called
    class Command: public CmdFinder<bool (Command::*)() const>
    {
        static Entry s_action[];

        bool add() const                    // some member functions
        {
            cout << "add called: command was `" << cmd() << "'\n";
            if (beyond().length())
                cout << "Beyond " << cmd() << " `" << beyond() << "'\n";
            return true;
        }
        bool error() const
        {
            cout << "unrecognized command: `" << cmd() << "' called\n" <<
                    count() << " matching alternatives found\n";
            return true;
        }
        bool quit() const
        {
            cout << "quit called: quitting this series\n";
            return false;
        }

        public:
            Command();                      // Declare the default constructor

            bool run(std::string const &cmd)        // run a command
            {
                return (this->*findCmd(cmd))();     // execute the command
                                                    // matching `cmd'
            }
    };

    // Define command-function associations. Note that the last is given an
    // empty command-text. This is not required: a command text could have
    // been specified for the last command as well.
    Command::Entry Command::s_action[] =
    {
        Entry("add",  &Command::add),
        Entry("quit", &Command::quit),
        Entry("",     &Command::error),
    };

    Command::Command()                      // Define the default constructor
    :                                       // Note the use of `FunctionPtr'
        CmdFinder<FunctionPtr>(s_action,
                               s_action + sizeof(s_action) / sizeof(Entry))
    {}

    void run(Command &cmd, char const *descr, size_t mode = 0)
    {
        if (mode)
            cmd.setMode(mode);

        cout << "Enter 5 x a command using " << descr << ".\n";
        for (size_t idx = 0; idx++ < 5; )
        {
            cout << "Enter command " << idx << ": ";
            string text;
            getline(cin, text);
            if (!cmd.run(text))             // run a command
                break;
        }
    }

    int main()
    {
        Command cmd;                        // define a command

        // enter 5 commands using the default mode
        run(cmd, "the default mode");
        run(cmd, "abbreviated commands", Command::UNIQUE);
        run(cmd, "abbreviated case-insensitive commands",
            Command::UNIQUE | Command::INSENSITIVE);
        run(cmd, "abbreviated command lines",
            Command::USE_FIRST | Command::UNIQUE);
        run(cmd, "abbreviated case-insensitive command lines",
            Command::USE_FIRST | Command::UNIQUE | Command::INSENSITIVE);
        return 0;
    }

FILES
bobcat/cmdfinder - defines the class interface
bobcat/cmdfinderbase - defines the base class of CmdFinder

SEE ALSO
bobcat(7), cmdfinderbase(3bobcat), errno(3bobcat)
http://manpages.ubuntu.com/manpages/precise/man3/cmdfinder.3bobcat.html
The XML for a JDBC Cache Store is its own XML Schema, as referenced in the community doc here:

This documentation needs to be added to the enterprise documentation. Without adding the appropriate cache store XML namespace/schema to your cache configuration, you will get XML parse errors. In the 6.2 Admin Guide, Chapter 15 gives XML configurations that will not work without making the change to include multiple XML Schemas. For a specific example, look at 15.2.2. JdbcStringBasedStore Configuration (Library Mode).

The multiple XML schemas seem to be added in JDBC cache store Library configurations and not in server mode configs. The header of the server mode configuration remains the same for configs with and without JDBC configured. John, am I correct with what I stated above? Also, there are no schemas found in the 6.3-related ISPN repo, so I assume that this is not relevant for JDG 6.3. So my 2 queries are:

1. Should the multiple XML schemas be added in server mode configs as well?
2. Should the multiple XML schemas be added in JDG 6.3 docs or only in 6.2?

I have resumed office from today. I consulted with Tomas about this and he will provide his findings about adding the XML schemas in 6.3 JDBC configs later today.

Hey Bobb, I've confirmed a working configuration for both JDG 6.2 and 6.3. Into the header:

    <infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                xsi:schemaLocation="urn:infinispan:config:6.0 urn:infinispan:config:jdbc:6.0"
                xmlns="urn:infinispan:config:6.0">

And add the xmlns="urn:infinispan:config:jdbc:6.0" definition into the specific JDBC cache store element, like this:

    <stringKeyedJdbcStore xmlns="urn:infinispan:config:jdbc:6.0"
                          fetchPersistentState="false"
                          ignoreModifications="false"
                          purgeOnStartup="false"
                          key2StringMapper="org.infinispan.loaders.keymappers.DefaultTwoWayKey2StringMapper">

This configuration works for me with both 6.1.1.ER1-redhat-1 and 6.0.3.Final-redhat-3. Hope that helps. Tomas

Hey Misha, could you please brew this and push it live?
I'll push the 6.2.1 changes now, and the 6.3 ones will go out with the async update, so we'll close off this bug once both are released. Created ticket. This content is now available on
https://bugzilla.redhat.com/show_bug.cgi?id=1122298
Technical Guest Post: "The things I've learned using Realm"

It's been exactly two years since I posted the article "How to use Realm like a champ"! In this article, I try to sum up all the things I've learned from using Realm, and how it all relates to the current landscape of Android local data persistence: Android Architecture Components, Android Jetpack, Room, LiveData, Paging, and so on. Stay tuned!

An overview of Realm

Realm debuted as a mobile-first database, and has since grown into its own full-scale data synchronization platform. From the get-go, Realm's vision was to provide the illusion of working with regular local objects, while in reality enabling safe and performant concurrent sharing of data. On top of that, Realm intends to handle most complexities of concurrency, and when something changes in the database, we receive notifications so that we can update our views to always reflect the latest state of the data. With the Sync Platform, these changes can be committed across multiple devices and get seamlessly merged, like any local write.

Personally, I've used the Realm Mobile Database in multiple applications, and if there's one major thing it taught me, it is to prefer simplicity over complexity. Sometimes we might be so used to doing something in some way that we don't even think of just how much easier it could be with a slightly different approach. In case of Realm, that different approach is observable queries, but of course, being able to define classes that are automatically mapped to the database without having to think about relational mapping and joins is also a plus.

Out of the box, with Realm, pagination from a local data source is also not a real problem: data is only loaded when accessed, so whether it's 200 items or 10,000, we can't run out of memory, and the query will generally be fast.
A story of faulty abstractions

It is commonly seen as a good practice to abstract away specific implementations under our own interfaces, so that, if need be, they can be safely replaced with a different implementation. And if you ever truly do need to swap out one implementation for the other, it's definitely helpful that changes are localized to a single "module", instead of influencing parts of the code all over the codebase.

However, sometimes we think we'd like the module to behave one way, and enforce a contract that might not necessarily be the best solution to our problem. In this case, our interface restricts us from using a different approach. In the case of local datasources, the common "mistake" is that if we have a DAO layer, then the DAO must have methods akin to this:

```java
public interface CatDao {
    List<Cat> getCatsWithBrownFur();
}
```

In which case we've hard-coded that we have no external arguments (the DAO receives its dependencies as constructor arguments, and is most likely a singleton instance), and that the retrieval of data is a single, synchronous fetch. Following up on that could be, for example, a Repository:

```java
public interface CatRepository {
    void getCatsWithBrownFur(OnDataLoaded<List<Cat>> callback);
}
```

In which case we'd ensure that getCatsWithBrownFur() is most likely executed on a background thread, and will make its callback on the UI thread. The retrieval of data would be a single, asynchronous fetch. The data is loaded in its entirety to memory on a background thread, then passed to the UI thread for it to be rendered. But is this really the best thing we can do?

What Realm's queries look like

Observable async queries, change listeners, and lazy loading

In Realm, or at least in its Java API, you receive a so-called RealmResults, which implements List. So we might be tempted to just use it as a List and move on.
But that's actually not how it's intended to be used: Realm updates any RealmResults after a change has been made to the database, and notifies its change listeners when that's happened. So in practice, setting up a Realm query on the UI thread involves both defining the query AND adding a listener to handle any future change that happens to it.

```java
private Realm realm;
private RealmResults<Cat> cats;

private OrderedRealmCollectionChangeListener<RealmResults<Cat>> realmChangeListener =
    (cats, changeSet) -> {
        adapter.updateData(cats, changeSet); // initial load + future changes
    };

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    realm = Realm.getDefaultInstance();           // ← open thread-local instance of Realm
    cats = realm.where(Cat.class).findAllAsync(); // ← execute async query
    cats.addChangeListener(realmChangeListener);  // ← listen for initial evaluation + changes
}

@Override
protected void onDestroy() {
    super.onDestroy();
    cats.removeAllChangeListeners(); // ← remove listeners
    realm.close();                   // ← close thread-local instance of Realm
}
```

For those familiar with Swift, the API is slightly different, but it does the same thing:

```swift
let realm = try! Realm()
let results = try! Realm().objects(Cat.self)
var notificationToken: NotificationToken?

override func viewDidLoad() {
    super.viewDidLoad()
    // …
    self.notificationToken = results.observe { (changes: RealmCollectionChange) in
        // … (handle changes)
    }
    // call `notificationToken?.invalidate()` when needed
}
```

But we're getting lost in details. What's different here, compared to the aforementioned common DAO approach? Well, pretty much everything. The common DAO pattern assumes that we want to call the DAO to refresh our data whenever we know that any other part of the application, on any other thread, could have potentially changed the data we're showing. That's quite the responsibility!
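The contrast with the one-shot DAO contract can be made concrete with a minimal observable store in plain Java. This is an illustrative sketch, not the Realm API: listeners receive the current result set on subscription and again after every write, so nobody has to remember to "refresh":

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class ObservableStore {
    private final List<String> rows = new ArrayList<>();
    private final List<Consumer<List<String>>> listeners = new ArrayList<>();

    public void observe(Consumer<List<String>> listener) {
        listeners.add(listener);
        listener.accept(List.copyOf(rows));          // initial load on subscribe
    }

    public void insert(String row) {                 // a "transaction"
        rows.add(row);
        for (Consumer<List<String>> l : listeners) { // notify on every commit
            l.accept(List.copyOf(rows));
        }
    }

    public static void main(String[] args) {
        ObservableStore store = new ObservableStore();
        store.observe(cats -> System.out.println("render " + cats)); // renders []
        store.insert("brown cat");                   // listener fires again
    }
}
```

With this shape, the "refresh the data" responsibility moves from every caller into the store itself, which is exactly the inversion that observable queries provide.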
In case of Realm, we can observe the query results — in fact, Realm automatically updates them for us whenever they've been changed. In this case, our listener is called, and we even receive the positions of where items have been inserted, deleted, or modified. When we commit a transaction, Realm handles all of this for us in the background.

Another thing to note is that when using Realm's async queries, the evaluation of the new result set and the diff happens on Realm's background thread, and we'll see the new data along with the changes only when the change listeners are being called.

When we call an accessor on a RealmResults, we actually receive a proxy instance that can read from and write to the data on the disk, essentially minimizing the amount of data read to memory. This eliminates the need for pagination, even when working with larger datasets.

```java
RealmResults<Cat> cats = realm.where(Cat.class).findAll(); // sync query for sake of example
Cat cat = cats.get(0);       // actually `fully_qualified_package_CatRealmProxy`
String name = cat.getName(); // calling the accessor reads the data from Realm
```

The price of proxies: threading with Realm

Anyone who's used Realm knows that there is a price to pay with proxy objects and lazy results, which also extends to Realm's thread-local instances. Namely, that while Realm handles concurrency internally, the objects/results/instances cannot be passed between threads.

```java
final Realm realm = Realm.getDefaultInstance();
new Thread(new Runnable() {
    @Override
    public void run() {
        RealmResults<Cat> cats = realm.where(Cat.class).findAll(); // <--
        // IllegalStateException: Realm access from incorrect thread.
        // Managed objects, lists, results, and Realm instances can only be
        // accessed on the thread where they were created.
    }
}).start();
```

I've often heard the claim that "threading with Realm is hard". I firmly believe this claim is false.
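The lazy-loading behavior described above, where an element is only materialized when an accessor is called, can be sketched with a List that counts how many elements it actually loaded. Illustrative only; this is not how RealmResults is implemented:

```java
import java.util.AbstractList;
import java.util.function.IntFunction;

public class LazyResults extends AbstractList<String> {
    private final int size;
    private final IntFunction<String> loader; // stands in for a read from disk
    int loads = 0;                            // how many elements were materialized

    public LazyResults(int size, IntFunction<String> loader) {
        this.size = size;
        this.loader = loader;
    }

    @Override
    public String get(int index) {
        loads++;                              // only pay for what you touch
        return loader.apply(index);
    }

    @Override
    public int size() {
        return size;
    }

    public static void main(String[] args) {
        LazyResults cats = new LazyResults(10_000, i -> "cat-" + i);
        System.out.println(cats.get(0));      // prints "cat-0"; exactly one load so far
        System.out.println(cats.loads);       // prints 1, not 10000
    }
}
```

Whether the list claims 200 or 10,000 elements, memory cost scales with the elements actually accessed, which is why pagination becomes unnecessary.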
Even when there are multiple threads involved, using Realm is pretty easy: just get an instance of Realm for while you need it, then close it.

```java
// on a background thread
try (Realm realm = Realm.getDefaultInstance()) { // open Realm for this thread
    // use realm
} // realm is closed by try-with-resources
```

The one special case the developer has to remember is that instead of passing a managed object, you should generally pass its primary key, then re-query it on the background thread instead.

```java
final String idToChange = myCat.getId();

// create an asynchronous transaction;
// it will happen on a background thread
realm.executeTransactionAsync(bgRealm -> {
    // we need to find the Cat we want to modify
    // from the background thread's Realm
    Cat cat = bgRealm.where(Cat.class)
                     .equalTo(CatFields.ID, idToChange)
                     .findFirst();
    // do something with the cat
});
```

However, that's actually as far as complexity goes when it comes to Realm and threading, out of the box. RealmResults are already evaluated automatically by Realm on a background thread, and then the data is passed to the listener on the UI thread, so there is zero reason to ever try to pass them between threads.

Why people think threading with Realm is hard

So why is the claim so common? After spending a lot of time on the realm tag on Stack Overflow, I believe it comes down to the following primary causes:

1.) The developer has introduced RxJava into their project without understanding it, and ends up introducing such a level of complexity that they don't understand what threads their code is running on.

2.) The developer doesn't know how RealmResults works, specifically that findAllAsync() will evaluate the results on a background thread, and then the RealmChangeListener will receive it on the UI thread — and instead they attempt to handle the threading that Realm would already handle for them internally.

3.)
Reluctance to pass the instance of Realm as a method argument, and/or intending to use Realm's thread-local instances as a global singleton.

The first and second causes are self-explanatory. But what about the third?

When you "get an instance of Realm" with Realm.getDefaultInstance(), you actually receive a thread-local and reference-counted instance, where the reference count is managed by calls to getDefaultInstance() and close(). It is not a singleton, and it cannot be accessed on different threads. It's a little known fact, but to avoid passing Realm around and instead see it as a thread-local singleton, it's possible to store it in a ThreadLocal.

Why do people need and want a singleton instance of Realm? To create the data layer implementation for their abstraction, of course!

```java
public interface CatDao {
    List<Cat> getCatsWithBrownFur();
    // ← initial attempt: uses List (no changes!) and has no arguments

    // -------------
    // alternatives?

    RealmResults<Cat> getCatsWithBrownFur();
    // ← problem: a Realm opened and closed could immediately
    //   invalidate the results; Realm-specific

    RealmResults<Cat> getCatsWithBrownFur(Realm realm);
    // ← problem: we're passing Realm as a dependency
    //   as part of the contract; Realm-specific
}
```

So we'd like to create a contract that lets us hide Realm as an implementation detail, but takes it into consideration that data could be evaluated on a different thread, and is sent to us with some delay through a subscription (to a listener). And on top of that, any future writes to the database that invalidate our current dataset should trigger a retrieval of the new data, and notify us of the change.

```java
public interface CatDao {
    LiveObservableList<Cat> getCatsWithBrownFur();
    // it would be possible to write a wrapper
    // around RealmResults like this

    Observable<List<Cat>> getCatsWithBrownFur();
    // it is possible to convert listeners to an Rx Observable

    // something else???
}
```

How Realm shaped the future of local data persistence on Android

The introduction of Jetpack, Room, and LiveData

With time, other libraries emerged that supported the notion of "observable queries". One of the first to follow Realm was SQLBrite, which exposed SQLite queries as Observable<List<T>>. Eventually, Google created their own approach, called the Android Architecture Components — which are now a part of Android Jetpack. The idea was to solve common problems that developers face when writing applications, and simplify them by providing a set of libraries and an opinionated guideline that helps solve these problems.

Observable queries with Room and LiveData

The most notable addition to the Architecture Components is LiveData. It is a "holder" that can store a particular item, and can have multiple observers. Whenever the data stored within the LiveData is changed, the observers are notified, and they receive the new data. When a new observer subscribes for changes of the LiveData, it receives the previously set latest data.

One of the biggest additions to the Architecture Components was Room, which is Google's own ORM over SQLite.
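The LiveData contract just described (hold a value, notify observers on change, replay the latest value to new observers) fits in a few lines of plain Java. TinyLiveData is a hypothetical stand-in, not androidx.lifecycle.LiveData, and it deliberately ignores threading and lifecycle:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class TinyLiveData<T> {
    private T value;
    private boolean hasValue = false;
    private final List<Consumer<T>> observers = new ArrayList<>();

    public void observe(Consumer<T> observer) {
        observers.add(observer);
        if (hasValue) {
            observer.accept(value); // replay the latest value on subscribe
        }
    }

    public void setValue(T newValue) {
        value = newValue;
        hasValue = true;
        for (Consumer<T> o : observers) {
            o.accept(newValue);     // notify every observer of the change
        }
    }

    public static void main(String[] args) {
        TinyLiveData<String> cats = new TinyLiveData<>();
        cats.setValue("[]");
        cats.observe(v -> System.out.println("observer saw " + v)); // receives "[]" at once
        cats.setValue("[brown cat]");                               // receives the update
    }
}
```

The real LiveData adds lifecycle awareness on top of this core: observers tied to a destroyed LifecycleOwner are removed automatically, which is what makes explicit unsubscription unnecessary.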
But what's more interesting is that it allowed defining a DAO (for accessing the entity's tables) like this:

```java
@Dao
public interface CatDao {
    @Query("SELECT * FROM CATS WHERE FUR = 'BROWN'")
    LiveData<List<Cat>> getCatsWithBrownFur();
}
```

We can then use this in our Activity:

```java
private LiveData<List<Cat>> cats;

private Observer<List<Cat>> observer = (cats) -> {
    adapter.updateData(cats); // initial load + future changes
};

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    CatDao catDao = RoomDatabase.getInstance().catDao(); // get Dao
    cats = catDao.getCatsWithBrownFur(); // ← execute query
    cats.observe(this, observer);        // ← listen for initial evaluation + changes
}

@Override
protected void onDestroy() {
    super.onDestroy();
    // no need to unsubscribe,
    // because of `observe(LifecycleOwner)`
}
```

Doesn't this look extremely familiar? Swap out Observer for RealmChangeListener, and LiveData<List<Cat>> for RealmResults<Cat>, and the setup is nearly identical.

When we subscribe for changes of a LiveData (or more-so, "start observing it"), then as long as there is at least one active observer, Room begins to evaluate the query on a background thread, then passes the results to the UI thread. Afterwards, Room tracks the invalidation of the table(s) this query belongs to, and if such a table is modified, the query is re-evaluated.

The key differences are that the diff calculation is moved to the adapter (see ListAdapter), and that LiveData's lifecycle integration allows for automatic unsubscription of its observers, instead of having to do it explicitly. Otherwise, the behavior is rather similar; in fact, so similar that LiveData<List<Cat>> could stand in for RealmResults<Cat> in the contract above.

The downside of LiveData<List<T>>

In case of Room, there's a downside to using LiveData<List<T>>: on every relevant change, the query is re-run and the full result set is loaded into memory. When relying on Realm's lazy evaluation, this isn't really a problem: we only ever retrieve items when we access them. We don't load the full dataset to memory. However, Google realized this poses a problem, and began working on a solution.
The release of a new Architecture Component: Paging

To eliminate the need for complete re-evaluation of a modified dataset each time a change occurs, Google invented something amazing: the LivePagedListProvider. Since then, the API has changed a bit, so it's more appropriate to refer to it as the combination of DataSource.Factory and LivePagedListBuilder. With the help of these classes, it's possible to expose an observable query as a LiveData<PagedList<T>>.

A PagedList loads its contents asynchronously, in pages, as its items are accessed. The way Room's Paging integration works is that we can expose a DataSource.Factory from the DAO:

```java
@Dao
public interface CatDao {
    @Query("SELECT * FROM CATS WHERE FUR = 'BROWN'")
    DataSource.Factory<Integer, Cat> getCatsWithBrownFur();
}
```

Then we can do:

```java
private LiveData<PagedList<Cat>> cats;

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    CatDao catDao = RoomDatabase.getInstance().catDao(); // get Dao
    DataSource.Factory<Integer, Cat> dataSourceFactory = catDao.getCatsWithBrownFur();
    cats = new LivePagedListBuilder<>(dataSourceFactory,
            new PagedList.Config.Builder()
                    .setPageSize(20)
                    .setPrefetchDistance(20)
                    .setEnablePlaceholders(true) // the placeholders flag is a boolean
                    .build())
            .setInitialLoadKey(0)
            .build();
    cats.observe(this, observer); // ← listen for initial evaluation + changes
}
```

This allows Room to expose an observable query where the data is fetched asynchronously, page by page, instead of loading the full dataset. This makes Room much faster than using regular SQLite, while retaining the benefit of observable queries.

Making Realm work with Paging

Realm already brings lots of benefits to the table, but many people intend to use Realm as an implementation detail, not directly. As such, they might want to keep observable queries (just like Room does), but don't want to use managed results and managed objects: managed objects always see the latest state of the database, which also means they mutate over time. They're still proxies, and not regular objects.
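The page-by-page loading idea behind PagedList can be sketched with a hypothetical DataSource interface (this is not the androidx.paging API): only the requested page is ever materialized, regardless of how large the full result set is.

```java
import java.util.List;
import java.util.stream.IntStream;

public class PagingSketch {
    // Hypothetical positional data source: load `count` items starting at `start`.
    interface DataSource {
        List<String> loadRange(int start, int count);
    }

    // Translate a page index into a range request against the source.
    static List<String> loadPage(DataSource source, int pageIndex, int pageSize) {
        return source.loadRange(pageIndex * pageSize, pageSize);
    }

    public static void main(String[] args) {
        // Conceptually 10,000+ rows, but only the requested page is built.
        DataSource cats = (start, count) ->
                IntStream.range(start, start + count)
                         .mapToObj(i -> "cat-" + i)
                         .toList();
        System.out.println(loadPage(cats, 0, 20).get(0)); // prints "cat-0"
        System.out.println(loadPage(cats, 3, 20).get(0)); // prints "cat-60"
    }
}
```

The real Paging library adds prefetch distance, placeholders, and invalidation on top of this core idea, but the memory win comes from exactly this: fetching fixed-size ranges on demand.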
One possibility would be to read the data from Realm using realm.copyFromRealm(results). However, this method can only be called on the results' own thread (in this scenario, generally the UI thread), and it would read the full dataset from disk. We'd have the exact same problem as with LiveData<List<T>>.

We could move the copying of data from Realm to a background thread, but Realm cannot observe changes on regular threads, only on threads associated with a Looper. However, it's possible to create a HandlerThread; if we can create a Realm instance on this handler thread, execute and observe queries on it, and keep those queries (and the Realm instance) alive while we're observing them, then it can work!

Even then, we would still have the problem of loading the full dataset on the handler thread for each change. But not if we can expose Realm's query results through the Paging library!

Monarchy: global singleton Realm with LiveData and Paging integration

I've been working on a way to expose RealmResults in an observable manner from a handler thread: either copying the dataset, mapping the objects to a different type, or going through the Paging library. The end result is called Monarchy, a library that lets you use Realm much as if you were using LiveData with a Room DAO.
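The dedicated handler-thread scheme described above (one thread that owns the database instance, runs the queries, and hands plain copies to observers) boils down to a thread draining a task queue. A language-neutral Python sketch, with all names invented for illustration:

```python
import queue
import threading

class LooperThread:
    """A thread that owns its resources and processes posted tasks in order,
    similar in spirit to Android's HandlerThread."""
    def __init__(self):
        self._tasks = queue.Queue()
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def post(self, task):
        self._tasks.put(task)

    def quit(self):
        self._tasks.put(None)       # sentinel ends the loop
        self._thread.join()

    def _loop(self):
        while True:
            task = self._tasks.get()
            if task is None:
                break
            task()

# The "Realm" lives on the looper thread; observers receive unmanaged copies.
database = {"cats": ["Tom", "Whiskers"]}
copies = []
looper = LooperThread()
looper.post(lambda: copies.append(list(database["cats"])))  # copy, don't share
looper.quit()
```

The important property is that all database access happens on one long-lived thread, and only detached copies ever cross the thread boundary.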
To create a global instance, all it needs is a RealmConfiguration, and then we can use it as a DAO implementation like this:

public interface CatDao {
    DataSource.Factory<Integer, Cat> getCatsWithBrownFur();
}

@Singleton
public class CatDaoImpl implements CatDao {
    private final Monarchy monarchy;

    @Inject
    CatDaoImpl(Monarchy monarchy) {
        this.monarchy = monarchy;
    }

    @Override
    public DataSource.Factory<Integer, Cat> getCatsWithBrownFur() {
        return monarchy.createDataSourceFactory(realm ->
                realm.where(Cat.class).equalTo(CatFields.FUR, "BROWN"));
    }
}

One strange design choice of the Paging library is that the fetch executor can only be set on the LivePagedListBuilder, and not on the DataSource.Factory. This means that to provide Monarchy's executor (which runs fetches on the handler thread), Monarchy must be the one to create the final LiveData<PagedList<Cat>>:

LiveData<PagedList<Cat>> cats = monarchy.findAllPagedWithChanges(
        dataSourceFactory, livePagedListBuilder);

On the other hand, it is still quite convenient: we receive unmanaged objects that are loaded asynchronously, page by page, and we've also retained the ability to receive future updates to our data when the database is written to.

Another interesting tidbit is that Monarchy uses LiveData, so depending on whether there are active observers, it can automatically manage whether the underlying Realm should be open or closed: lifecycle management of Realm instances moves entirely into the onActive/onInactive callbacks of LiveData.

Conclusion

If it weren't for my time using Realm, it would probably be much harder for me to understand the driving forces that shaped the Android Architecture Components: especially Room, and its LiveData integration.
The ability to listen for changes and always receive the latest state of the data with minimal effort, just by subscribing to an observable query, might seem foreign at first, but it simplifies the task of "fetching data from the network, storing it locally, and querying for and displaying that data", which is what most apps need to do.

Why would we manually manage data-load callbacks, if we could just run background tasks that fetch data from the network and write it directly into our database, while all the UI needs to do is observe for changes and have any new data passed to it automatically?

What if, in "Clean Architecture", fetching data isn't even a "use-case", but just an effect of finding that our data is outdated and should be refreshed?

What is the point of introducing Redux, if it keeps us from subscribing to our database queries, because the database would become a second store and would therefore force data loading to be a single fetch inside "middlewares"? (Unless subscriptions are seen as "state" that should be built up as a side-effect in the store's observer, of course.)

What if the abstraction we're trying to build binds our hands and keeps us from finding simpler solutions that solve our problem more efficiently?

Realm's observable queries were ahead of their time, but they shaped the future of Android local data persistence. Instead of manually invalidating data, we can just observe for changes. What else do we take for granted and build on top of, even though it keeps us from finding a better solution?

Using Realm taught me that even though there's a common way of doing things, sometimes taking a completely different approach yields much better results. I'm glad that I had the chance to try Realm, and could learn from the opportunity.

Note from Realm: We liked Gabor's article, so we are re-posting it here with Gabor's permission. Gabor Varadi regularly writes articles on Medium, which can be found here.
While Monarchy is not an official Realm library, it is an interesting complementary library developed by Gabor. You can find it and file issues on his repo.
https://realm.io/blog/realm-guest-post-the-things-ive-learned-using-realm-by-Gabor-Varadi/
Getting Started with Kubernetes (at home) — Part 3

In the first two parts of this series, we looked at setting up a production Kubernetes cluster in our labs. In part three, we are going to deploy some services to our cluster, such as Guacamole and Keycloak. Step-by-step documentation and further service examples are here.

Guacamole

Guacamole is a very useful piece of software that allows you to remotely connect to your devices via RDP, SSH, or other protocols. I use it extensively to access my lab resources, even when I am at home.

You can use this Helm Chart to install Guacamole on your Kubernetes cluster. The steps are as follows:

- Clone the Guacamole Helm Chart from here
- Apply any changes to values.yaml, such as the annotations and ingress settings.
- Deploy the Helm Chart from the apache-guacamole-helm-chart directory:

  helm install . -f values.yaml --name=guacamole --namespace=guacamole

- An ingress is automatically created by the Helm Chart, and you can access it based on the hosts: section of values.yaml

Once you have deployed the Helm Chart, you should be able to access Guacamole at the ingress hostname specified in values.yaml.

Keycloak

Keycloak is an open-source single sign-on solution that is similar to Microsoft's ADFS product. You can read more about Keycloak on my blog.

There is a stable Keycloak Helm Chart available in the default Helm repo, which we will be using to deploy Keycloak; you can find it here.

- Apply any changes to values.yaml
- Deploy the helm chart stable/keycloak with values:

  helm install --name keycloak stable/keycloak --values values.yaml

- Create an ingress:

  kubectl apply -f ingress.yaml

- Get the default password for the keycloak user:

  kubectl get secret --namespace default keycloak-http -o jsonpath="{.data.password}" | base64 --decode; echo

Though this article is on the shorter side, hopefully it exemplifies how easy it can be to run services in Kubernetes.
We mainly looked at pre-made Helm Charts in this article; however, deploying a service without a Chart can be just as easy. I prefer using charts, as I find them easier to manage than plain Kubernetes manifest files. You can check out my public Kubernetes repo for further information and more service examples. Originally published on May 4, 2019.
https://medium.com/@just_insane/getting-started-with-kubernetes-at-home-part-3-537b045afd1?source=---------4------------------
Building a "complex" progress widget

Hey, I've been stuck on this for a few days now... I'm trying to build a widget that can receive calls from the main + other threads to display progress/allow the user to make some decisions, while keeping the remaining part of the app "frozen"...

I initially started with qApp->installEventFilter() and gave it my own class that does the filtering. That class has a setLock() flag which enables checking for mouse/keyboard input events and then ignoring them while it's locked.

The class below is part of a larger app, so sadly it won't work as a standalone test... in any case it shows how far I've got so far... The problem I have is that when I push updates from other threads, they never get delivered. The idea was that when a large process starts, the lock becomes enabled, and while the lock is enabled I fire an update every 50 ms to redraw/processEvents(), but that appears to never work, and any queued connections don't get processed while the main loop is working. I used #pragma omp parallel for to do some loops, and from each of these loops I fire a signal to increment the value (progress), but they never display an update... In any case, here is the code...

#ifndef NOTIFICATIONMANAGER_H
#define NOTIFICATIONMANAGER_H

#include "QObject"

class QDialog;
class QProgressBar;
class QLabel;
class QTextBrowser;
class QTime;
class QTimer;
class QGraphicsOpacityEffect;
class QPropertyAnimation;
class NotificationManagerWorker;

#include <ctime>
#include <ratio>
#include <chrono>

namespace NotificationManagerEnums {
enum notificationType {
    header,
    primary,
    secondary,
    tethary,
    list,
};
}

/*!
 * \class NotificationManager
 *
 * \brief NotificationManager class
 */
class NotificationManager : public QObject {
    Q_OBJECT

    NotificationManager();

    QWidget *mMainDialog;
    QProgressBar *mProgressItem;
    QLabel *mInfoA;
    QLabel *mInfoB;
    QLabel *mInfoC;
    QTextBrowser *mLog;
    QTimer *mTime;
    std::atomic<int> mValue;
    QGraphicsOpacityEffect *mEffectOpacity;
    QPropertyAnimation *mEffectAnimation;
    NotificationManagerWorker *mWorkerPtr;
    bool mWorker;
    std::chrono::steady_clock::time_point t1;
    std::chrono::steady_clock::time_point t2;

    void checkUpdate();
    void worker();

private Q_SLOTS:
    void setRan(int min, int max);
    void setMin(int val);
    void setMax(int val);
    void setCur(int val);
    void setMsg(const QString &msg, int type);
    void showBox(const QString &title, int min = 0, int max = 100, int val = 0);
    void incVal();

public:
    static NotificationManager *NM();
    ~NotificationManager();

    void showManager();
    void hideDialog();
    void update();
    void incrementValThread();

Q_SIGNALS:
    void setRange(int min, int max);
    void setMinimum(int val);
    void setMaximum(int val);
    void setValue(int val);
    void setMessageTex(const QString &msg, int type = 0);
    void showNotificationBox(const QString &title, int min = 0, int max = 100, int val = 0);
    void incrementVal();
    void incrementValT();
};

class NotificationManagerWorker : public QObject {
    Q_OBJECT
    friend class NotificationManager;

    bool mWorker;
    QThread *t;
    NotificationManager *mManager;

public:
    NotificationManagerWorker(NotificationManager *manager);
    ~NotificationManagerWorker();

    void runWorker();
};

#endif // NOTIFICATIONMANAGER_H

#include <QtWidgets/QGridLayout>
#include <Lib/BaseSystems/InputMonitor.h>
#include <QtGui/qguiapplication.h>
#include <QtConcurrent/QtConcurrent>
#include "NotificationManager.h"
#include "QDialog"
#include "QProgressBar"
#include "QLabel"
#include "QTextBrowser"
#include "QTime"
#include "QTimer"
#include "QDebug"
#include "QGraphicsOpacityEffect"
#include "QPropertyAnimation"
#include "QThread"

using namespace std::chrono;
NotificationManager::NotificationManager() {
    mValue = 0;
    mMainDialog = new QWidget();
    mMainDialog->setWindowFlags(Qt::Tool); // | Qt::WindowStaysOnTopHint | Qt::Window | Qt::WindowTitleHint | Qt::CustomizeWindowHint
    mProgressItem = new QProgressBar();
    mProgressItem->setTextVisible(true);
    mProgressItem->setAlignment(Qt::AlignCenter);
    mProgressItem->setFormat("%v / %m %p%");
    mLog = new QTextBrowser();
    QGridLayout *lay_main = new QGridLayout();
    mMainDialog->setLayout(lay_main);
    mInfoA = new QLabel("");
    mInfoB = new QLabel("");
    mInfoC = new QLabel("");
    lay_main->addWidget(mInfoA);
    lay_main->addWidget(mInfoB);
    lay_main->addWidget(mProgressItem);
    lay_main->addWidget(mInfoC);
    lay_main->addWidget(mLog);

    connect(this, &NotificationManager::setRange, this, &NotificationManager::setRan, Qt::DirectConnection);
    connect(this, &NotificationManager::setMinimum, this, &NotificationManager::setMin, Qt::DirectConnection);
    connect(this, &NotificationManager::setMaximum, this, &NotificationManager::setMax, Qt::DirectConnection);
    connect(this, &NotificationManager::setValue, this, &NotificationManager::setCur, Qt::DirectConnection);
    connect(this, &NotificationManager::setMessageTex, this, &NotificationManager::setMsg, Qt::DirectConnection);
    connect(this, &NotificationManager::showNotificationBox, this, &NotificationManager::showBox, Qt::DirectConnection);
    connect(this, &NotificationManager::incrementVal, this, &NotificationManager::incVal, Qt::DirectConnection);
    connect(this, &NotificationManager::incrementValT, this, &NotificationManager::incVal, Qt::QueuedConnection);

    mTime = new QTimer(mMainDialog);

    // failed, did not work:
    //connect(mTime, &QTimer::timeout, []() {
    //    qDebug() << "timer kicking?";
    //    if (icInputMonitor::IM()->isLock()) {
    //        qDebug() << "timer kick";
    //        qApp->processEvents();
    //    }
    //}); //SIGNAL(timeout()), myWidget, SLOT(showGPS()));
    //mTime->start(250); // time specified in ms

    mEffectOpacity = new QGraphicsOpacityEffect(this);
    //mMainDialog->setGraphicsEffect(mEffectOpacity);
    mEffectAnimation = new QPropertyAnimation(mEffectOpacity, "opacity");
    mEffectAnimation->setDuration(10000);
    mEffectAnimation->setStartValue(1);
    mEffectAnimation->setEndValue(0);
    mEffectAnimation->setEasingCurve(QEasingCurve::OutBack);
    //connect(mEffectAnimation, &QPropertyAnimation::finished, mMainDialog, &QWidget::hide);

    t1 = steady_clock::now();
    mWorker = true;
    //QtConcurrent::run(this, &NotificationManager::worker); // failed, did not work
    mWorkerPtr = new NotificationManagerWorker(this);
}

NotificationManager *NotificationManager::NM() {
    static NotificationManager nm;
    return &nm;
}

NotificationManager::~NotificationManager() {
}

void NotificationManager::showManager() {
    //mMainDialog->exec();
}

void NotificationManager::hideDialog() {
    InputMonitor::IM()->setLock(false);
    //mMainDialog->hide();
    mEffectAnimation->setStartValue(1);
    //mEffectAnimation->start(QPropertyAnimation::KeepWhenStopped);
    // now implement a slot called hideThisWidget() to do
    // things like hide any background dimmer, etc.
    update();
    qDebug() << "Closed the fucking window?";
}

void NotificationManager::setRan(int min, int max) {
    qDebug() << "Range set :" << min << " " << max;
    mValue = 0;
    mProgressItem->setRange(min, max);
}

void NotificationManager::setMin(int val) {
    mProgressItem->setMinimum(val);
}

void NotificationManager::setMax(int val) {
    mProgressItem->setMaximum(val);
}

void NotificationManager::setCur(int val) {
    mProgressItem->setValue(val);
}

void NotificationManager::incVal() {
    mValue++;
    qDebug() << mValue;
    if (mValue >= mProgressItem->value())
        mProgressItem->setValue(mValue);
    t2 = steady_clock::now();
    duration<double> time_span = duration_cast<duration<double>>(t2 - t1);
    if (time_span.count() > 500) {
        t1 = t2;
        mProgressItem->setValue(mValue);
    }
}

void NotificationManager::setMsg(const QString &msg, int type) {
    switch (type) {
    case NotificationManagerEnums::header: {
        mMainDialog->setWindowIconText(msg);
        break;
    }
    case NotificationManagerEnums::primary: {
        mInfoA->setText(msg);
        break;
    }
    case NotificationManagerEnums::secondary: {
        mInfoB->setText(msg);
        break;
    }
    case NotificationManagerEnums::tethary: {
        mInfoC->setText(msg);
        break;
    }
    case NotificationManagerEnums::list: {
        mLog->append(QTime::currentTime().toString() + " - " + msg);
        break;
    }
    }
    update();
}

void NotificationManager::showBox(const QString &title, int min, int max, int val) {
    InputMonitor::IM()->setLock(true);
    mEffectOpacity->setOpacity(1.0);
    mValue = val;
    mMainDialog->setWindowTitle("Processing : " + title);
    mInfoA->setText(title);
    mProgressItem->setMinimum(min);
    mProgressItem->setMaximum(max);
    mProgressItem->setValue(val);
    mMainDialog->show();
}

void NotificationManager::update() {
    mMainDialog->repaint();
    qApp->processEvents();
}

void NotificationManager::checkUpdate() {
}

void NotificationManager::incrementValThread() {
    Q_EMIT incrementValT();
}

void NotificationManager::worker() {
    auto *controller = InputMonitor::IM();
    while (mWorker) {
        QThread::msleep(10);
        if (controller->isLock()) {
            qDebug() << "I update!";
            qApp->processEvents();
        }
    }
}

NotificationManagerWorker::NotificationManagerWorker(NotificationManager *manager)
    : mWorker(true), mManager(manager) {
    t = new QThread();
    moveToThread(t);
    connect(t, &QThread::started, this, &NotificationManagerWorker::runWorker);
    t->start();
}

NotificationManagerWorker::~NotificationManagerWorker() {
}

void NotificationManagerWorker::runWorker() {
    auto *controller = InputMonitor::IM();
    while (mWorker) {
        QThread::msleep(50);
        if (controller->isLock()) {
            mManager->update();
            qDebug() << "I update!";
        }
    }
}

Any help would be amazing. I'm lost with the Qt system... :-(

A rough example of the code I would run with it:

void manager::processData(vec<data> &data) {
    for (auto &x : data) {
        x.doStuff();
        NotificationManager::NM()->msg(x.status());
        NotificationManager::NM()->incrementVal();
    }

    #pragma omp parallel for
    for (int x = 0; x < data.size(); x++) {
        data[x].adjust(x);
        NotificationManager::NM()->msg(data[x].status());
        NotificationManager::NM()->incrementValT(); // T for threaded
    }
}

The first option I think "might" work... but the second, threaded option never works... sigh. What to do here?

As far as I can tell, the main thread is busy because it waits for the sub-threads made by the pragma to finish. Each of those pragma threads that are processing data can also call the notification manager and send more processing updates there... so it's all threaded down the line from here, with the main thread being locked... and if I read correctly, processEvents() would not process threaded events, as it only processes events for the thread that calls it, which is not the pragma thread but my generic worker thread that fires an update every 50 ms... sigh!

- SGaist Lifetime Qt Champion last edited by

Hi,

From what you described, it seems that this manager should be built on top of the Qt Model/View framework. You can have a model that stores whatever data is being pushed, and then the view on top of it would update when needed.
This would also allow you to easily show several progresses in parallel.

@sgaist Yeah, long term that is the plan. But for now I'm struggling to actually display/update widgets while I process data. I know it's my "design" problem, as I lock my main thread, and thus I can't update the widget from the sub-threads that the main thread spawns using pragma/other threads.... So, as the example at the end of the topic shows, I'm kinda stuck :-( Is there any way to "steal" the main thread somehow, to allow Qt to redraw the widget? The problem I have now is that I branch out to multiple sub-threads and then go back to a single thread as I process data. The branched-out threads can't update my progress widget.

- SGaist Lifetime Qt Champion last edited by

Don't try to steal anything. Use signals and slots to communicate from your secondary threads back to your main thread.

@sgaist said in Building a "complex" progress widget:

Don't try to steal anything. Use signals and slots to communicate from your secondary threads back to your main thread.

Yes, but as far as I can tell the main thread is "working" (or the user has taken control over it), as it spawned the secondary threads and is waiting for them to finish processing before moving on.

Edit: OK, so I've now run

QMetaObject::invokeMethod(this, [=]{ callToLongFunction(); }, Qt::QueuedConnection);

This appears to work; however, any loop that was

#pragma omp parallel for
for () {}

no longer threads... so Qt somehow manages to break the OMP multithreading? -- well, a standalone test indicates that no, all works there, but my app stopped threading... sweet.

OK, I'm a nub, it's official! Everything works. It threads & runs the job in Qt's internal event loop, thus I don't have any issues. Wow. OK, I need to learn more about Qt's threading/event loop :D I need to fork one for myself and have it process all my work stuff now :-))

EDIT... OK, seems like I did a full loop.
I went from processing in the user thread to processing in the Qt event loop - fine - but the pragma still won't update the widget at the correct time... sigh. Since we are now in the event loop, processing the function takes precedence over processing the OMP threads' update signals... I need some kind of priority attached to the signal, to tell Qt: I know you have your loop, but process this signal now.

Right now, with Qt::DirectConnection while trying to update val/etc., I get this:

###### WARNING: QWidget::repaint: Recursive repaint detected ((null):0, (null)) ######
###### WARNING: QBackingStore::endPaint() called with active painter on backingstore paint device ((null):0, (null)) ######

And apparently putting this inside the omp loop seems to "work"...

if (omp_get_thread_num() == 0) {
    qApp->processEvents();
}

... ahhhh multithreading <3
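For anyone skimming this thread later: the underlying behavior (queued signals are only delivered once the receiving thread's event loop regains control) can be reproduced with a bare queue. This is an illustrative Python model of the mechanism, not Qt code:

```python
import queue

event_queue = queue.Queue()   # stands in for the main thread's event loop queue
delivered = []

def post_event(e):
    """What a Qt::QueuedConnection does: enqueue the call, don't invoke it."""
    event_queue.put(e)

def process_events():
    """What the event loop does when it regains control."""
    while not event_queue.empty():
        delivered.append(event_queue.get())

# The "main thread" is busy in a long computation: workers post progress
# updates, but nothing is delivered until the loop runs again.
for i in range(5):
    post_event(("progress", i))

assert delivered == []        # still nothing: the loop hasn't run yet
process_events()              # control returns to the event loop
```

This is why the progress bar in the thread above only updated once `processEvents()` was called: the queued slot invocations were sitting in exactly such a queue the whole time.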
https://forum.qt.io/topic/106789/building-a-complex-progress-widget
My experience working with Jekyll

Yesterday I blogged about the new static-site hosting service, Surge. In order to test it, I decided to rebuild JavaScript Cookbook as a static site. (Which, to be honest, was a silly decision. Surge takes about five minutes to use. My rewrite took about five hours. ;)

I decided to give Jekyll a try, and I thought I'd share my thoughts about the platform. Obviously I've just built one site with it, so take what I say with a grain of salt, but if you're considering setting up a static site, maybe this post will be helpful.

Jekyll, like HarpJS, is run via a command line tool. Unlike Harp, Jekyll is a Ruby-based tool, but you don't need to know Ruby in order to use it. I had kind of a crash course in Ruby while I worked with it, but that's only because of some of the requirements I had while building out my site. The full requirements are documented, with the big red flag being that there is no Windows support. There's unofficial support, but I'd be wary of committing to Jekyll if you need to support developers on the Windows platform.

Once installed, you can fire up the Jekyll server from the command line and begin working. Jekyll will automatically refresh while you work, so it is quick to get up and running. Speaking of testing, the command line includes an option to create a default site: simply do jekyll new directoryname. At this point you can start typing away and testing the results in the browser.

I'm assuming most of my readers are already familiar with why tools like this are cool, but in case you aren't: the point of a static site generator is to let you build sites in a similar fashion to dynamic server-side apps, but with flat, static files as the output. So, as a practical matter, that means I can build a template and simply use a token, like {{body}}, that will be replaced with a page's content.
I can write a page, include just the relevant data for that page, and when viewed in the browser it will automatically be wrapped in the template. This isn't necessarily that special - it's 101-level PHP/ColdFusion/Node stuff - but the generator tool will spit out flat HTML files that can then be hosted on things like S3, Google Cloud, or, of course, Surge.

For its templates, Jekyll allows for Markdown and Liquid. It does not support Jade, because Jade is evil and smelly and shouldn't be supported anywhere. I found Liquid to be very nice. You've got your basics (variable output, looping, conditionals) as well as some powerful filters too. For example, this will title-case a string: {{ title | capitalize }}. This will do truncation: {{ content | truncate: 200, '...' }}. You can do this with EJS templates in HarpJS as well (but I didn't know that till today!).

The other big change in Jekyll is how it handles data for content. In Harp, this is separated into a file unique to a folder. In Jekyll, this is done via "front matter": basically formatted content on top of a page. Initially I preferred Harp's way, but the more I played with Jekyll, the more it seemed natural to include it with the content itself. You can, if you want, also include arbitrary data files, which is cool. If you need something that isn't related to content, you could abstract it out into a JSON or YAML file and make use of it in your site. Hell, you can even use CSV.

As a trivial example of a Liquid file, here's a super simple page I use for thanking people after they submit content. It doesn't have anything dynamic in it at all, but the front matter on top tells Jekyll what template to use and passes on a title value.

---
layout: default
title: Thank You!
---

<p>
Thanks for sending your content submission. I'll try to respond as soon as possible.
If for some reason I don't get back to you, please feel free to drop me a line via email (raymondcamden at gmail dot com).
</p>

Here is a slightly more complex example, the default layout for the site. Note the use of variables and conditions for determining which tab to highlight.

<!doctype html>
<html lang="en">
<head>
<title>{{ page.title }}</title>
<link rel="stylesheet" href="/css/bootstrap.min.css" type="text/css" />
<link rel="stylesheet" href="/css/app.css" type="text/css" />
<script src="/js/jquery-2.0.2.min.js"></script>
<script src="/js/bootstrap.min.js"></script>
<script src="/js/prism.js"></script>
<link rel="stylesheet" href="/css/prism.css" />
<link rel="alternate" type="application/rss+xml" title="RSS" href="" />
</head>
<body>
<div class="container">
<div class="navbar navbar-inverse">
<div class="navbar-inner">
<div class="container" style="width: auto;">
<a class="btn btn-navbar" data-toggle="collapse" data-target=".nav-collapse">
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</a>
<a class="brand" href="/">JavaScript Cookbook</a>
<div class="nav-collapse">
<ul class="nav">
<li {% if page.url == '/index.html' %}class="active"{% endif %}><a href="/">Home</a></li>
<li {% if page.url == '/submit.html' %}class="active"{% endif %}><a href="/submit.html">Submit</a></li>
<li {% if page.url == '/about.html' %}class="active"{% endif %}><a href="/about.html">About</a></li>
</ul>
<form class="navbar-search pull-right" action="/search.html" method="get">
<input type="search" class="search-query span2" placeholder="Search" name="search">
</form>
</div><!-- /.nav-collapse -->
</div>
</div><!-- /navbar-inner -->
</div><!-- /navbar -->

{{ content }}

<script>
ga('create', 'UA-...0863-21', 'javascriptcookbook.com'); // tracking ID truncated in the original
ga('send', 'pageview');
</script>

</body>
</html>

One of the cooler aspects of Liquid is the assign operator. Given that you have access to data about your site, a list of articles for example, you can quickly slice and dice it within your template. While Jekyll makes it easy to work with blog posts, my content was a bit different.
I needed a quick way to get all my article content and sort it by the date published. Here's how the "Latest Articles" list gets generated:

<h3>Latest Articles</h3>
{% assign sorted = (site.pages | where: "layout", "article" | sort: 'published' | reverse) %}
{% for page in sorted limit: 5 %}
<p>
<a href="{{ page.dir }}">{{ page.title }}</a> -
{{ page.published | date: "%-m/%-d/%y at %I:%M" }}
</p>
{% endfor %}

Like I said, that assign command just makes me happy all over.

So this is all well and good - but there is one killer feature of Jekyll that makes me think this may be the best tool for the job I've seen yet - plugins. Jekyll lets you create multiple additions to the server to do things like:

- Create generators - code that will create new files for you
- Add tags to the Liquid template system
- Add filters that can be used in assign calls

These plugins must be written in Ruby, but even with my absolute lack of knowledge of the language, I was able to create the two plugins needed to complete my site. Let me be clear - without these plugins I would not have been able to complete the conversion. (Well, I would have had to do a lot more work.)

Let me give you a concrete example of where this helps. One of the issues you run into with static-site generators is that they require one file per URL. What I mean is: for every page of my site, from the home page, to the "About" page, to each piece of blog content, you will have one physical file. That's certainly OK. I just add a file, write my content, and I know I get the benefits of automatic layout, variable substitution, etc. But there are some cases where this requirement is a hindrance.

Imagine you have N articles. Each article has a set of assigned tags. In Harp this would be defined in your data file; in Jekyll this would just be front matter.
Here's a sample from one of the JSCookbook articles:

layout: article
title: Check if a value is an array
published: 2014-10-23T21:18:46.858Z
author: Maciek
sourceurl:
tags: [array]
id: 544970b682f286f555000001
sesURL: Check-if-a-value-is-an-array
moreinfo:

Now imagine I want to make one page for each tag. Normally I'd have to:

- Figure out all my tags. That's not necessarily a bad thing - you may only have 5-10 static tags.
- Make a file for each tag, called sometag.html.
- Write the code that slurps content and displays items matching that tag.
- Include that code in every page.

Both Harp and Jekyll support template languages that make this easy. At the end, I have N pages, one for each tag. If I remove a tag, or add one, I have to remember to create (or delete) the flat file. Not the end of the world, but something you could forget.

With Jekyll, I can use a plugin to create a generator. This will run on server startup and when things change, and it can add new pages to the system dynamically. Here is a plugin I wrote to handle my tag issue. Keep in mind, I'm probably better at ballet dancing than Ruby.

module Jekyll

  class TagPage < Page
    def initialize(site, base, dir, tag, pages)
      @site = site
      @base = base
      @dir = dir
      @name = 'index.html'
      #print "Running Tag page for "+tag+" "+pages.length.to_s+"\n"
      self.process(@name)
      self.read_yaml(File.join(base, '_layouts'), "tag.html")
      self.data['tag'] = tag
      self.data['title'] = tag
      self.data['pages'] = pages
    end
  end

  class TagPageGenerator < Generator
    safe true

    def generate(site)
      dir = "tag/"

      # create unique hash of tags
      unique_tags = {}
      site.pages.each do |page|
        if page.data.key?('layout') and page.data["layout"] == 'article'
          page.data["tags"].each do |tag|
            unique_tags[tag] = [] if !unique_tags.include?(tag)
            unique_tags[tag].push(page)
          end
        end
      end

      # create a page for each tag
      unique_tags.keys.each do |tag|
        site.pages << TagPage.new(site, site.source, File.join(dir, tag), tag, unique_tags[tag])
      end
    end
  end

end

And that's it! (A big thank you to Ryan Morrissey for his blog post about this - I ripped my initial code from it.)

Another example of plugin support is adding your own tags. I needed a way to generate a unique list of tags for the home page. I wrote this plugin, which adds a taglist tag to Liquid for my site.

module Jekyll

  class TagListTag < Liquid::Tag
    def initialize(tag_name, text, tokens)
      super
    end

    def render(context)
      tags = []
      context.registers[:site].pages.each do |page|
        if page.data.key?('layout') and page.data["layout"] == 'article'
          if page.data.key?('tags')
            page.data["tags"].each do |tag|
              tags.push(tag) if !tags.include?(tag)
            end
          end
        end
      end
      tags = tags.sort

      # now output the list
      s = ""
      tags.each do |tag|
        s += "<li><a href='/tag/" + tag + "'>" + tag + "</a></li>"
      end
      return s
    end
  end

end

Liquid::Template.register_tag('taglist', Jekyll::TagListTag)

Again - I'm probably writing pretty crappy Ruby - but I love that I was able to extend Jekyll this way. If Harp could add this in - and let me use JavaScript - that would be killer.

And that was really it. I converted my form to use FormKeep and converted the search to use a Google Custom Search Engine. You can see the final result here. As JavaScript Cookbook has not had the traffic I'd like (hint hint - I'm still looking for content!), I'll be pointing the domain to the static version so I can have a bit less Node out there. Once I add in Grunt support and add in Surge, I can write a post and push live in 30 seconds. I can't wait.

p.s.
I didn't include a zip of the Jekyll version, but if anyone would like it, just ask.

Archived Comments

Why is your blog still heavyweight WordPress?

A few reasons. 1) My blog is *very* large. I've got 5K+ blog entries. Jekyll may be able to handle that, I don't know, but I kinda assumed it may not. Plus, I got into Jekyll a few months after I migrated from ColdFusion to WP. 2) I like the editing experience in WP. As a person who blogs *a lot*, I wanted something that would enable me to create content quickly. Even if I'm using a local file system to write content, which is quick, I'd still have the issue of handling images. WP makes this super easy.

Since Jekyll is a static site, the number of pages shouldn't matter. But your point about images is legit. Give something like Ghost a shot maybe?

The issue with Ghost is that I need more control over my layout of URLs too. For example, my blog URLs use XX for month versus X, and that impacts my Disqus comments and my Google history. All stuff I could fix manually I suppose, but at the end, I need precise control over the output.

Why don't you like Jade?

I don't like the template syntax. It is purely an aesthetic thing, nothing more.
https://www.raymondcamden.com/2015/03/05/my-experience-working-with-jekyll
CC-MAIN-2021-17
en
refinedweb
Hello, I need the OpenMV to give me the approximate height of a tennis ball after detecting it. I have trained my data and the recognition works fine; however, I'm having difficulty measuring the height. I have seen from your other forums the formula which is used to measure the height:

def rect_size_to_distance(r):
    # r == (x, y, w, h) → r[3] == h
    return ((lens_mm * average_head_height_mm * image_height_pixels) / (r[3] * sensor_h_mm)) - offset_mm

However, I'm having difficulty getting the "r" coordinates. This is how I'm recognizing the tennis ball:

for obj1 in tf.classify(net1, img, min_scale=1.0, scale_mul=0.1, x_overlap=0.5, y_overlap=0.5):
    predictions_list1 = list(zip(labels1, obj1.output()))

Could someone please help with drawing a rectangle around the tennis ball and getting the height? Thank you in advance.
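As an aside (this is my own sketch, not an official OpenMV answer): the formula quoted above is just the pinhole camera model, and its arithmetic can be factored into a plain function and sanity-checked on the desktop. All the numbers below are made-up placeholders, and `box_h_px` stands for "however you obtain the detection rectangle's height" - whether a classification result exposes a bounding box at all depends on the OpenMV firmware and model in use, so check its documentation.

```python
def distance_mm(box_h_px, lens_mm, object_h_mm, image_h_px, sensor_h_mm, offset_mm=0.0):
    """Pinhole-model distance estimate: the farther the object, the
    fewer pixels tall it appears.

    box_h_px    -- height of the detection rectangle in pixels (r[3] above)
    lens_mm     -- focal length of the lens
    object_h_mm -- real-world height of the object (a tennis ball is ~67 mm)
    image_h_px  -- image height in pixels
    sensor_h_mm -- physical sensor height
    offset_mm   -- optional calibration offset
    """
    return (lens_mm * object_h_mm * image_h_px) / (box_h_px * sensor_h_mm) - offset_mm

# Hypothetical numbers: 2.8 mm lens, 240 px tall image, 2.952 mm sensor,
# ball bounding box 60 px tall, no calibration offset.
d = distance_mm(60, 2.8, 67.0, 240, 2.952)
```

A useful property to test: doubling the box height should halve the estimated distance (with a zero offset), which follows directly from the formula.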
https://forums.openmv.io/t/measuring-the-height-of-a-tennis-ball-after-detection/2191
Do I need the servo shield if I want to control just one Hitec servo like the HS-85MG? Can I just connect it directly to one of the pins of the OpenMV board to control it with the Servo.py script?

No, you can control up to 3 servos directly with the M7 camera. If you need to control more servos, you'll need the servo shield.

Hi Ibrahim, thanks for the clarification. For my project, I just need to control one servo remotely using a web browser, but I'd prefer not to use an additional board to control the servo via UART/I2C. However, I found this post () where you indicate that the WiFi shield and servos cannot be used at the same time. Is that still the case with the M7 camera?

The M7 has one free pin for servos when the WiFi shield is used: servo channel 3, or P9. The M4 does not. Note that the I2C / UART serial pins are also free.

Great, thanks Kwabena. One more question. Can I power the board and the servo via the USB connector using an external 5V 2A power bank?

No, you need to connect to the VIN connector. If you have some way to provide power over a servo connector then you can use one of the servo channels for power.

Sorry Kwabena, I'm not sure I follow. The servo has 3 wires, but are you saying that I need to connect the VIN and GND to an external power source and only use the servo signal wire to control it?

The regulator can supply up to 500mA, so I think it's safe to power the cam and servo from a USB power bank (connect the servo to the 3.3v output on the left header).
If you want to power the servo from 5v, you'll have to use one of these to power the cam and servo from USB: Or better, you could make your own shield with a USB connector and PWM output for the servo. It would be best to design the shield with the stackable headers we use so you can connect it on the back or front sides: But these will also work: If you don't know where to start, you could re-use one of our shields:

In my case, the servo requires an operating voltage range of 4.8V ~ 6.0V, so this will not work for me. Instead of creating my own shield, can I just use your servo shield with the WiFi shield to control one servo?

Yes, sure you can. Just make sure to apply 5v on the VIN connector. Or, alternatively, supply 5v to one of the connectors on the servo shield.

Are the connections below correct to control one (1) servo using the M7 board with the WiFi shield only?

Yes!

The WiFi shield doesn't have P9 listed on it, so should I connect the servo signal wire to the RST pin on the WiFi shield, which lines up with P9 on the M7 board?

The 5V servo jumps a tiny bit when I power up the M7 board, but after that it doesn't respond to the pulse width I send (between 1000-2000us):

from pyb import Servo
import time

servo = Servo(3)  # P9
servo.pulse_width(1800)
time.sleep(10)
servo.pulse_width(1200)

Yeah, that's the right connection. Mmm, Ibrahim might be able to test this for you right now.

Hi, this works with r2.5:

from pyb import Servo
import time

servo = Servo(3)  # P9
while (True):
    servo.pulse_width(1200)
    time.sleep(100)
    servo.pulse_width(1800)
    time.sleep(100)

Thanks Ibrahim. The sleep(10) in the original servo_control.py example was too short for my servo. Your snapshot code uses sleep(100) and with that value, it works. You might want to update the servo_control.py example. Anyway, thank you both, for looking into this.
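Unrelated to the wiring question, but since the 1000-2000 µs pulse widths come up repeatedly in this thread: that range is the conventional travel band for hobby servos, and a small helper makes the angle-to-pulse mapping explicit. This is a generic sketch of mine, not part of the pyb module, and real servos vary - check your servo's datasheet for its actual limits.

```python
def angle_to_pulse_us(angle_deg, min_us=1000, max_us=2000, travel_deg=180):
    """Map a target angle onto a hobby-servo pulse width.

    Clamps the requested angle into [0, travel_deg], then maps it
    linearly onto [min_us, max_us]. The defaults assume the common
    1000-2000 us / 180 degree convention (an assumption, not a spec).
    """
    angle_deg = max(0.0, min(float(travel_deg), float(angle_deg)))
    return int(round(min_us + (max_us - min_us) * angle_deg / travel_deg))
```

With the defaults, `servo.pulse_width(angle_to_pulse_us(90))` would request the mid position (1500 µs), and out-of-range angles are clamped instead of driving the servo past its end stops.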
https://forums.openmv.io/t/servo-shield/391
nesktop is an experimental playground for ideas about how to display and work with notes in a graphical UI. A version of nesktop is available to play with - though I can't guarantee that it won't ever be broken. We're in early days.

I'd like to focus on things that I haven't seen implemented before, at least mostly. Some of the ideas will have been done before, but I'm unaware. Most of them I have seen talked about or played with, but haven't gone anywhere. I'll try and be clear about where things come from, at least for my own sake.

The project is related to my content-addressable notes idea, the idea being that one way of interacting with this repository of notes could be through a locally-run server displaying UI through your browser.

Possibilities, ideas and problems are hosted on a todo tracker - feel free to raise issues if you can think of something interesting.

An incomplete list of influences in no particular order
https://git.sr.ht/~sjm/nesktop
Alpha problem with ui.ImageContext.get_image()

I have modified the Sketch.py code to draw a succession of ovals instead of using path.stroke(), with the intention of using the pencil pressure to change the alpha at different points along the path. In the code below I have kept it constant for simplicity. But the alpha value does not seem to be captured correctly in ctx.get_image(), causing it to change when path_action() is called at the end of the path.

Just add the code below to the Sketch.py example, and change the definition in touch_began() to self.path = MyPath() to see the problem when you lift the pencil/finger at the end of the stroke.

import math
import ui

def distanceBetween(point1, point2):
    return math.sqrt((point2[0] - point1[0])**2 + (point2[1] - point1[1])**2)

def angleBetween(point1, point2):
    return math.atan2(point2[0] - point1[0], point2[1] - point1[1])

class MyPath():
    def move_to(self, x, y):
        self.path = [(x, y)]

    def line_to(self, x, y):
        self.path.append((x, y))

    def stroke(self):
        w = 20
        ui.set_alpha(0.006)
        lastPoint = self.path[0]
        for i in range(1, len(self.path)):
            currentPoint = self.path[i]
            dist = distanceBetween(lastPoint, currentPoint)
            angle = angleBetween(lastPoint, currentPoint)
            for j in range(int(dist)):
                x = lastPoint[0] + (math.sin(angle) * j)
                y = lastPoint[1] + (math.cos(angle) * j)
                circle = ui.Path.oval(x, y, w, w)
                circle.fill()
            lastPoint = currentPoint

Sure you didn't change anything else? Your code works for me.

I copied and pasted your code into a new script, and get the same result - when I lift my finger off at the end of a stroke, the alpha value of the whole stroke changes (gets lighter). I am running the latest version of Pythonista (3.2) using Python 3.6 on a 10.5" iPad Pro with iOS 11.2.
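For what it's worth, the per-pixel stamping in stroke() is plain linear interpolation along each segment, and that part can be exercised outside Pythonista. The function below is my own refactoring of that inner loop (not part of the ui module), which lets you check the geometry separately from the alpha question:

```python
import math

def points_along(p1, p2):
    # One point per pixel of distance from p1 toward p2, using the same
    # atan2/sin/cos stepping as the MyPath.stroke() loop above.
    dist = math.sqrt((p2[0] - p1[0]) ** 2 + (p2[1] - p1[1]) ** 2)
    angle = math.atan2(p2[0] - p1[0], p2[1] - p1[1])
    return [(p1[0] + math.sin(angle) * j, p1[1] + math.cos(angle) * j)
            for j in range(int(dist))]
```

Stamping these points with a constant low alpha should produce a uniformly shaded stroke, so if the geometry checks out, the lightness jump at path_action() points at how the image context handles alpha rather than at the interpolation.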
https://forum.omz-software.com/topic/4789/alpha-problem-with-ui-imagecontext-get_image/3
Learning Higher-Order Components in React by Building a Loading Screen

Why?

The other day I was adding a simple loading indicator to screens in my new app. You've probably used this pattern before:

This is great. It's simple and convenient for one screen. If you're not used to React Native, ActivityIndicator is one of the beautiful parts of app development - a component that exists. When you find yourself repeating this same pattern often, it's nice to see what we can do to rid ourselves of the tedious task of Copy & Paste.

What do I know?

I know that I'll have a

What am I apprehensive about?

The strategy I will use to genuinely make my life easier (instead of harder - sometimes we need a friendly reminder).

First Pass

Besides the function wrapping React.PureComponent, everything should look familiar. If you want to skip ahead and use this as-is, just paste this into our InfoScreen file and wrap the export:

export default withLoadingScreen(InfoScreen)

This pattern may look familiar if you've used Apollo or Redux before:

export default connect(mapStateToProps, mapActionCreators)(InfoScreen)

Compose

If you're wondering how to use both connect and withLoadingScreen, you can wrap the components individually or use a compose function. Writing a compose function by hand is cool, but importing the one from one of the libraries you're already using is cooler. Both redux and react-apollo come bundled with their own:

Does it work?

Depending on what your setup looks like, it might work perfectly or it might not work at all! 😃 🙃

I'm using react-navigation, which supports a static method called navigationOptions to override header title and styles. The loading screen works as intended, but the header styles are gone!

What happened? Static methods aren't copied over! It's super easy to overlook but also super easy to fix. There's a library called hoist-non-react-statics that will automatically copy over static variables like navigationOptions, defaultProps, and propTypes.
Second Pass: Hoisting Static Variables

Instead of returning the class immediately, we return it only after calling hoistNonReactStatics. The header is rendering correctly again. Woo!

Third Pass: Passing in Options

Now that the loading screen itself functions correctly, I want to pass in a different loading indicator size for different screens. Sometimes the default loading indicator is too big or the wrong color, so I'd like to be able to adjust when I need to. connect and graphql both give you options:

connect(mapStateToProps, mapActionCreators)
graphql(MyQuery, { options })

We can do the same by adding another function call. In this case I wanted to control the size of the ActivityIndicator:

withLoadingScreen('large')(InfoScreen)

// or using compose:
compose(
  withLoadingScreen('large'),
  connect(mapStateToProps, mapActionCreators)
)(InfoScreen)

Bonus Round: Naming Your Components

You're bound to run into issues. Sometimes the component name doesn't show up or doesn't match the component in question. We can fix this easily by writing a simple function called getDisplayName:

function getDisplayName(WrappedComponent) {
  return (
    WrappedComponent.displayName ||
    WrappedComponent.name ||
    "LoadingScreen"
  );
}

Conclusion

Higher-order components have a few weird things about them that you can fix easily. Code responsibly.

Fun things to do:
- Follow me on Twitter.
- Check out Orchard: Stay in touch with people that matter the most and see how powerful Expo & React Native can be for building your next app.
https://medium.com/@peterpme/learning-higher-order-components-in-react-by-building-a-loading-screen-9f705b89f569
Handles low level communication with "the other side".

#include <transceiver.hpp>

This class handles the sending/receiving/buffering of data through the OS level sockets and also the creation/destruction of the sockets themselves. Definition at line 79 of file transceiver.hpp.

Constructor. Construct a transceiver object based on an initial file descriptor to listen on and a function to pass messages on to. Definition at line 195 of file transceiver.cpp. References pollFds, socket, wakeUpFdIn, and wakeUpFdOut.

Free fd/pipe and all its associated resources. By calling this function you close the passed file descriptor and free up its associated buffers and resources. It is safe to call this function at any time with any fd (even bad ones). If requests still exist with this fd then they will be lost. Definition at line 316 of file transceiver.cpp. Referenced by transmit().

General transceiver handler. This function is called by Manager::handler() to both transmit data passed to it from requests and relay received data back to them as a Message. The function will return true if there is nothing at all for it to do. Definition at line 64 of file transceiver.cpp. References Fastcgipp::reventsZero(), Fastcgipp::Message::size, and Fastcgipp::Message::type.

Direct interface to Buffer::requestWrite(). Definition at line 93 of file transceiver.hpp. References buffer, and Fastcgipp::Transceiver::Buffer::requestWrite().

Direct interface to Buffer::secureWrite(). Definition at line 95 of file transceiver.hpp. References buffer, Fastcgipp::Transceiver::Buffer::secureWrite(), and transmit().

Blocks until there is data to receive or a call to wake() is made. Definition at line 106 of file transceiver.hpp.

Transmit all buffered data possible. Definition at line 24 of file transceiver.cpp. References buffer, Fastcgipp::Transceiver::Buffer::empty(), freeFd(), Fastcgipp::Transceiver::Buffer::freeRead(), and Fastcgipp::Transceiver::Buffer::requestRead(). Referenced by secureWrite().

Forces a wakeup from a call to sleep(). Definition at line 189 of file transceiver.cpp.

Buffer for transmitting data. Definition at line 258 of file transceiver.hpp. Referenced by requestWrite(), secureWrite(), and transmit().

Container associating file descriptors with their receive buffers. Definition at line 272 of file transceiver.hpp.

poll() file descriptors container. Definition at line 263 of file transceiver.hpp. Referenced by freeFd(), sleep(), and Transceiver().

Function to call to pass messages to requests. Definition at line 260 of file transceiver.hpp.

Socket to listen for connections on. Definition at line 265 of file transceiver.hpp. Referenced by Transceiver().

Input file descriptor to the wakeup socket pair. Definition at line 267 of file transceiver.hpp. Referenced by Transceiver().

Output file descriptor to the wakeup socket pair. Definition at line 269 of file transceiver.hpp. Referenced by Transceiver().
http://www.nongnu.org/fastcgipp/doc/2.1/a00083.html
In this notebook, we'll describe, implement, and test some simple and efficient strategies for sampling without replacement from a categorical distribution. Given a set of items indexed by $1, \ldots, n$ and weights $w_1, \ldots, w_n$, we want to sample $0 < k \le n$ elements without replacement from the set.

Theory

The probability of the sampling without replacement scheme can be computed analytically. Let $z$ be an ordered sample without replacement from the indices $\{1, \ldots, n\}$ of size $0 < k \le n$. Borrowing Python notation, let $z_{:t}$ denote the indices up to, but not including, $t$. The probability of $z$ is

$$
\mathrm{Pr}(z) = \prod_{t=1}^{k} p(z_t \mid z_{:t})
\quad\text{ where }\quad
p(z_t \mid z_{:t}) = \frac{ w_{z_t} }{ W_t(z) }
\quad\text{ and }\quad
W_t(z) = \sum_{i=t}^n w_{z_{i}}
$$

Note that $w_{z_t}$ is the weight of the $t^{\text{th}}$ item sampled in $z$ and $W_t(z)$ is the normalizing constant at time $t$. This probability is evaluated by p_perm (below), and it can be used to test that $z$ is sampled according to the correct sampling without replacement process.

def p_perm(w, z):
    "The probability of a permutation `z` under the sampling without replacement scheme."
    n = len(w); k = len(z)
    assert 0 < k <= n
    wz = w[np.array(z, dtype=int)]
    W = wz[::-1].cumsum()
    return np.product(wz / W)

def swor_numpy(w, R):
    n = len(w)
    p = w / w.sum()   # must normalize `w` first, unlike Gumbel version
    U = list(range(n))
    return np.array([np.random.choice(U, size=n, p=p, replace=0)
                     for _ in range(R)])

Heap-based sampling

Using heap sampling, we can do the computation in $\mathcal{O}(N + K \log N)$. It's possible that shrinking the heap rather than leaving it size $n$ could yield an improvement. The implementation that I am using is from my Python arsenal.

from arsenal.maths.sumheap import SumHeap

def swor_heap(w, R):
    n = len(w)
    z = np.zeros((R, n), dtype=int)
    for r in range(R):
        z[r] = SumHeap(w).swor(n)
    return z

def swor_gumbel(w, R):
    n = len(w)
    G = np.random.gumbel(0, 1, size=(R, n))
    G += np.log(w)
    G *= -1
    return np.argsort(G, axis=1)

Efraimidis and Spirakis (2006)'s algorithm, modified slightly to use Exponential random variates for aesthetic reasons. The Gumbel-sort and Exponential-sort algorithms are very tightly connected, as I have discussed in a 2014 article and as can be seen in the similarity of the code for the two methods.

def swor_exp(w, R):
    n = len(w)
    E = -np.log(np.random.uniform(0, 1, size=(R, n)))
    E /= w
    return np.argsort(E, axis=1)

import numpy as np, pylab as pl
from numpy.random import uniform
from arsenal.maths import compare, random_dist

R = 50_000
v = random_dist(4)

methods = [
    swor_numpy,
    swor_gumbel,
    swor_heap,
    swor_exp,
]

S = {f.__name__: f(v, R) for f in methods}

from collections import Counter
from arsenal.maths.combinatorics import permute

def counts(S):
    "empirical distribution over z"
    c = Counter()
    m = len(S)
    for s in S:
        c[tuple(s)] += 1 / m
    return c

D = {name: counts(S[name]) for name in S}

R = {}
n = len(v)
for z in permute(range(n)):
    R[z] = p_perm(v, z)
    for d in D.values():
        d[z] += 0

# Check that p_perm sums to one.
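As a standalone sanity check of the theory section, the chain-rule probability can be verified exhaustively with the standard library alone: summed over all ordered size-$k$ samples, it must equal one, since every sample falls in exactly one branch of the sequential process. The function below is my own pure-Python re-implementation of the quantity that p_perm computes (written for illustration; it avoids the numpy/arsenal dependencies):

```python
from itertools import permutations

def p_perm_py(w, z):
    # Chain rule: at each step, the drawn item's weight divided by the
    # total weight of everything not yet drawn.
    remaining = sum(w)
    p = 1.0
    for i in z:
        p *= w[i] / remaining
        remaining -= w[i]
    return p

w = [0.1, 0.4, 0.2, 0.3]
# For each sample size k, the probabilities over all ordered k-subsets
# should sum to one (up to floating point).
totals = [sum(p_perm_py(w, z) for z in permutations(range(len(w)), k))
          for k in (1, 2, 3, 4)]
```

This kind of brute-force enumeration only works for tiny $n$, but it pins down the normalization independently of any sampler.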
np.testing.assert_allclose(sum(R.values()), 1)

for name, d in sorted(D.items()):
    compare(R, d).show(title=name);

Comparison: n=24
norms: [0.336428, 0.337442]
zero F1: 1
pearson: 0.999762
spearman: 0.99913
Linf: 0.00428132
same-sign: 100.0% (24/24)
max rel err: 0.105585
regression: [0.995 0.000]

Comparison: n=24
norms: [0.336428, 0.337414]
zero F1: 1
pearson: 0.999894
spearman: 0.998261
Linf: 0.0025007
same-sign: 100.0% (24/24)
max rel err: 0.118721
regression: [0.995 0.000]

Comparison: n=24
norms: [0.336428, 0.336196]
zero F1: 1
pearson: 0.999919
spearman: 0.997391
Linf: 0.00188318
same-sign: 100.0% (24/24)
max rel err: 0.118791
regression: [1.001 -0.000]

Comparison: n=24
norms: [0.336428, 0.336499]
zero F1: 1
pearson: 0.999856
spearman: 0.998261
Linf: 0.00253601
same-sign: 100.0% (24/24)
max rel err: 0.126029
regression: [1.000 0.000]

from arsenal.timer import timers

T = timers()
R = 50
for i in range(1, 15):
    n = 2**i
    #print('n=', n, 'i=', i)
    for _ in range(R):
        v = random_dist(n)
        np.random.shuffle(methods)
        for f in methods:
            name = f.__name__
            with T[name](n = n):
                S = f(v, R = 1)
            assert S.shape == (1, n)   # some sort of sanity check
print('done')

done

fig, ax = pl.subplots(ncols=2, figsize=(12, 5))
T.plot_feature('n', ax=ax[0])
fig.tight_layout()
T.plot_feature('n', ax=ax[1]); ax[1].set_yscale('log'); ax[1].set_xscale('log');

T.compare()

swor_exp is 1.5410x faster than swor_gumbel (median: swor_gumbel: 4.92334e-05 swor_exp: 3.19481e-05)
swor_exp is 1.1082x faster than swor_heap (median: swor_heap: 3.54052e-05 swor_exp: 3.19481e-05)
swor_exp is 10.4478x faster than swor_numpy (median: swor_numpy: 0.000333786 swor_exp: 3.19481e-05)

Remarks:

- The numpy version is not very competitive. That's because it uses a less efficient base algorithm that is not optimized for sampling without replacement.
- The heap-based implementation is pretty fast. It has the best asymptotic complexity if the sample size is less than $n$. That said, the heap-based sampler is harder to implement than the Exp and Gumbel algorithms, and harder to vectorize, unlike Exp and Gumbel.
- The difference between the Exp and Gumbel tricks is just that the Gumbel trick does a few more floating-point operations. In fact, as I pointed out in a 2014 article, the Exp-sort and Gumbel-sort tricks produce precisely the same sample if we use the same random seed.
- I suspect that the performance of both the Exp and Gumbel methods could be improved with a bit of implementation effort. For example, currently, there are some unnecessary extra temporary memory allocations. These algorithms are also trivial to parallelize. The real bottleneck is the random variate generation time.
https://timvieira.github.io/blog/post/2019/09/16/algorithms-for-sampling-without-replacement/
You're a very hungry fish, who has to eat a lot to stay alive. Eat other fish to survive, and avoid the dangerous Red Fish species, your only predator.

Hints:
- My energy goes away too fast? How can I slow down energy use?
- Is there something that can scare that pesky red fish out there?

My 3rd Ludum Dare compo entry, written in C++ with SFML on Linux :).

Controls:
- Move fish: keyboard arrows, WASD and joystick.
- Start: press space or any joystick button.
- To quit or cut screens: press ESC.

Running the game:
- Windows: unzip the file and run FishGoesFishing.exe
- Linux: compile from source, requires SFML 2.x.
- Mac OS X: compile from source, requires SFML 2.x. Sorry guys, I have no Mac, but if someone is generous enough to build the file and send me the binary I'll gladly add it to the page.

NOTE: the game screen says 2013, sorry about that, it's a "copy & paste" exception. I used the same base code for Minimum Friction, from Ludum Dare 27, and forgot to change it :(.

Downloads and Links

Ratings

Only managed 4800 alas. Fairly straightforward game, well presented. Pretty decent though!

Thanks Lucariatias, but all you need to do is run "make" to compile using the included Makefile. Also, it requires SFML 2, which you can download at.

Pretty decent game, very nostalgic of the MS-DOS days.

Was a nice little game, but I thought it needed something more. The graphics could have been better, and extra features could have been added! Maybe a growing system, or multiple enemies and other ways to survive! Was nice though.

Solid game, nice sounds, good graphics. I kept forgetting the red fish were bad and that there wasn't a size growing element to this game, but that's my fault. Overall a fine game, really enjoy the music and sound.

Why does it say 2013 on the title screen?

I was unable to compile your game despite installing SFML. But if it's any consolation, I took your advice and made Windows and Linux binaries of my game as you suggested.

Thanks for the feedback cheesepencil. Which error did you get? Notice that some Linux distros offer SFML 1.6 (Ubuntu 12.X and derivatives, for instance), which is dated. The game requires SFML 2.X; you can download the binary packages from sfml-dev.org. Was that the case?

Very Cool!

@pvwradtke Yes, I think so. I tried the dev package from my distro (Mint) and that didn't work (it's probably old). I tried dumping the contents of the 2.x download from the SFML site into the game's source folder in a couple different locations and that didn't work either. I'll probably end up playing the binary in Windows tomorrow as a matter of convenience - there are SO MANY GAMES TO RATE and it's just easier to play the binaries on Win.

Hi cheesepencil, I know how it goes, thanks for trying anyway. The Windows binary works fine under Wine (actually, I cross-compiled the Windows binary with Mingw32 on Arch). If Wine acts up with your video card driver, you may try running it with the -w switch, which forces a windowed mode.

Nice. Very simple but well done. Actually I would have liked to eat some of those red fishes. They looked delicious. Especially when running out of energy :D

Good job! The music and the game atmosphere are so relaxing, and make this a good game overall ;)

Nice game overall :) Green fish have the exact same move speed, which makes them a bit frustrating to chase.. and graphics could be a bit more polished, but other than that nice job!

Was nice to see the red fish reacting to being lured into the bigger beings without reading the hints :-).
Compiling on OS X gives:

zsh: no such file or directory: ./configure
g++ -std=c++11 -fpermissive -c main.cpp
In file included from main.cpp:11:
./Game.hpp:12:10: fatal error: 'SFML/Audio.hpp' file not found
#include <SFML/Audio.hpp>
         ^
1 error generated.
make: *** [main.o] Error 1
http://ludumdare.com/compo/ludum-dare-29/comment-page-5/?action=preview&uid=3728
Creating and publishing a Greasy Fork script

I've been using a couple of GreasyFork scripts with the Tampermonkey Firefox extension for a while. Only recently has a missing feature on a web page bothered me enough to consider creating a user script myself.

The community-maintained documentation is rather brief and not updated too often. But since user scripts are mostly just additional JavaScript running in the context of the web page, there isn't much to learn that's specific to user scripts. The most important part is the script header, with meta keys describing the script and specifying when it's going to be activated:

// ==UserScript==
// @name Copy TrueGaming achievement list
// @namespace
// @version 1.0.0
// @description Copies the achievements/trophies from the True Achievements...
// @author Damir Arh
// @license MIT
// @supportURL
// @match*/achievements*
// @grant none
// ==/UserScript==

In my case, most of the code was inspecting the DOM using the querySelector and querySelectorAll methods:

const filterDropdownTitleElement = document.querySelector(
  "#btnFlagFilter_Options .title"
);

I used unobtrusive JavaScript principles to add interaction to the page, i.e. I handle events using the addEventListener method:

const copyAnchor = document.createElement("a");
copyAnchor.setAttribute("href", "#");
copyAnchor.setAttribute("title", iconLabel);
copyAnchor.style.marginLeft = "0";
copyAnchor.appendChild(copyIcon);
copyAnchor.addEventListener("click", copyToClipboard);
filterDropdownTitleElement.appendChild(copyAnchor);

A basic development environment is provided by the Tampermonkey extension. You can open it via the Add a new script... menu command. When you save the script for the first time, it appears in its Installed userscripts list. You can open it in the built-in editor from there to modify it at a later time.

Although the editor is decent, I prefer Visual Studio Code. Even without TypeScript types, it can provide great code completion and help for methods used. This makes writing code a lot easier. And it even helped me notice a couple of mistakes in my code without running it.

Admittedly, this approach adds some overhead to testing the script. Instead of simply saving it, I have to copy and paste it to the Tampermonkey editor and save it there. I found a Stackoverflow answer with instructions for loading the script in Tampermonkey directly from the file system, but I couldn't get it to work. Despite that, for me the benefits of using Visual Studio Code outweigh the inconvenience of having to copy the script to test it.

Publishing a script on GreasyFork is just a matter of filling out a simple form. If you host your script on GitHub (or another Git host), the process is even simpler, as you can import it from there. To automate updates, you can even set up a web hook.

You can check the script I've written in my GitHub repository. Of course, the code is very specific to the problem I was solving. Still, the general approach will be useful in other scenarios as well. The key takeaway is that it isn't difficult to write a user script if you have some basic knowledge of writing JavaScript for the browser. It's much simpler to do than writing a standalone browser extension.
https://www.damirscorner.com/blog/posts/20210129-CreatingAndPublishingAGreasyForkScript.html