#include <itkImageSliceIteratorWithIndex.h> A multi-dimensional image iterator that extends the ImageLinearIteratorWithIndex from iteration along lines in an image to iteration along both lines and planes (slices) within an image. A slice is defined as a 2D plane spanned by two vectors pointing along orthogonal coordinate axes. Most of the functionality is inherited from the ImageSliceConstIteratorWithIndex; the current class only adds write access to image pixels. See ImageSliceConstIteratorWithIndex for details. Definition at line 69 of file itkImageSliceIteratorWithIndex.h. Definition at line 73 of file itkImageSliceIteratorWithIndex.h. Definition at line 109 of file itkImageConstIteratorWithIndex.h. Definition at line 74 of file itkImageSliceIteratorWithIndex.h. Default constructor. Needed since we provide a cast constructor. Constructor establishes an iterator to walk a particular image and a particular region of that image. Constructor that can be used to cast from an ImageIterator to an ImageSliceIteratorWithIndex. Many routines return an ImageIterator, but for a particular task, you may want an ImageSliceIteratorWithIndex. Rather than provide overloaded APIs that return different types of iterators, ITK returns ImageIterators and uses constructors to cast from an ImageIterator to an ImageSliceIteratorWithIndex. The construction from a const iterator is declared protected in order to enforce const correctness. Set the pixel value. Definition at line 105 of file itkImageSliceIteratorWithIndex.h. Return a reference to the pixel. This method will provide the fastest access to pixel data, but it will NOT support ImageAdaptors. Definition at line 114 of file itkImageSliceIteratorWithIndex.h.
https://itk.org/Doxygen/html/classitk_1_1ImageSliceIteratorWithIndex.html
CC-MAIN-2021-43
en
refinedweb
Leverage uWSGI spooler and cron in Django Project description Requirements Upgrade from v0.3 to v0.4 As of v0.4, djcall uses a PostgreSQL JSON field instead of a Picklefield for Caller.kwargs. This means the migration will fail unless you have only JSON-serializable contents in your djcall_caller.kwargs column; it will also fail if you don't run PostgreSQL. Sorry, but it became too much of an annoyance not to be able to query on Call kwargs. Anyway, a migration should take care of this for you. It leaves the old Picklefield, renamed from kwargs to old_kwargs, until the next release, when it will be dropped. Install pip install djcall Add djcall to INSTALLED_APPS and migrate. Usage
from djcall.models import Caller

Caller(
    # path to python callback
    callback='djblockchain.tezos.transaction_watch',
    # JSON serializable kwargs
    kwargs=dict(
        pk=transaction.pk,
    ),
).spool('blockchain')  # optional spooler name
No decorator, no nothing. If you have CRUDLFA+ or django.contrib.admin, you should see the jobs there, and be able to cancel them. Example project Setup example project: djcall-example collectstatic djcall-example migrate djcall-example createsuperuser Run with runserver: djcall-example runserver Or with uWSGI: uwsgi --env DJANGO_SETTINGS_MODULE=djcall_example.settings --env DEBUG=1 --spooler=/spooler/blockchain --spooler=/spooler/mail --spooler-processes 1 --http=:8000 --plugin=python --module=djcall_example.wsgi:application --honour-stdin --static-map /static=static History First made a dead simple pure Python generic spooler for uWSGI. Then made a first implementation including CRUDLFA+ support. This version adds:
- Cron model and support for uWSGI cron; crons can't be added or removed without a restart, but kwargs and options can be changed online
- CRUDLFA+ support is on hold, waiting on work in progress, because I don't want to build CRUD support here with templates, given the debt that would add; it's time to use components in CRUDLFA+ to make the CRUD for Cron/Background tasks awesome
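Before upgrading from v0.3, you can check in advance whether your existing pickled kwargs will survive the switch to JSON. A minimal sketch to run in a Django shell against the old version (it assumes the pre-migration Caller model still exposes the pickled kwargs attribute):

import json
from djcall.models import Caller

# flag callers whose pickled kwargs would not survive the JSON migration
for caller in Caller.objects.all():
    try:
        json.dumps(caller.kwargs)
    except (TypeError, ValueError):
        print(f'Caller {caller.pk} has non-JSON-serializable kwargs: {caller.kwargs!r}')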
https://pypi.org/project/djcall/
CC-MAIN-2021-43
en
refinedweb
As the title states, I am trying to use the previous rank to filter out the current row. Here's an example of my starting df:
df = pd.DataFrame({
    'rank': [1, 1, 2, 2, 3, 3],
    'x': [0, 3, 0, 3, 4, 2],
    'y': [0, 4, 0, 4, 5, 5],
    'z': [1, 3, 1.2, 2.95, 3, 6],
})
print(df)
#    rank  x  y     z
# 0     1  0  0  1.00
# 1     1  3  4  3.00
# 2     2  0  0  1.20
# 3     2  3  4  2.95
# 4     3  4  5  3.00
# 5     3  2  5  6.00
Here's what I want the output to be:
output = pd.DataFrame({
    'rank': [1, 1, 2, 3],
    'x': [0, 3, 0, 2],
    'y': [0, 4, 0, 5],
    'z': [1, 3, 1.2, 6],
})
print(output)
#    rank  x  y    z
# 0     1  0  0  1.0
# 1     1  3  4  3.0
# 2     2  0  0  1.2
# 5     3  2  5  6.0
Basically, if the previous rank has any row whose x and y are each within +-1 AND whose z is within +-0.1 of the current row, I want the current row removed. So given the rank 1 rows, ANY rank 2 row with any combo of x = (-1 to 1), y = (-1 to 1), z = (0.9 to 1.1) OR x = (2 to 5), y = (3 to 5), z = (2.9 to 3.1) should be removed. Thanks for all help in advance! Answer This is a bit tricky as you need to access the previous group. You can compute the groups using groupby first, and then iterate over the elements and perform your check with a custom function:
def check_previous_group(rank, d, groups):
    if rank - 1 not in groups.groups:
        # check if a previous group exists; if not, flag all rows False (i.e. not to be dropped)
        return pd.Series(False, index=d.index)
    else:
        # get previous group (rank-1)
        d_prev = groups.get_group(rank - 1)
        # get the absolute difference per row with the whole dataset
        # of the previous group: abs(d_prev - s)
        # if all differences are within 1/1/0.1 for x/y/z
        # for at least one row of the previous group,
        # then flag the row to be dropped (True)
        return d.apply(lambda s: abs(d_prev - s)[['x', 'y', 'z']].le([1, 1, 0.1]).all(1).any(), axis=1)

groups = df.groupby('rank')
mask = pd.concat([check_previous_group(rank, d, groups) for rank, d in groups])
df[~mask]
output:
#    rank  x  y    z
# 0     1  0  0  1.0
# 1     1  3  4  3.0
# 2     2  0  0  1.2
# 5     3  2  5  6.0
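If the groups are large, the row-by-row apply above can get slow. Here is a broadcast-based sketch of the same check using NumPy (equivalent logic with the same 1/1/0.1 tolerances; each group is compared against the unfiltered previous group, just like the answer above):

import numpy as np
import pandas as pd

def filter_by_previous_rank(df, tol=(1, 1, 0.1)):
    # True marks rows to drop because some row of the previous rank is within tolerance
    drop = pd.Series(False, index=df.index)
    for rank, d in df.groupby('rank'):
        prev = df.loc[df['rank'] == rank - 1, ['x', 'y', 'z']].to_numpy()
        if prev.size == 0:
            continue
        cur = d[['x', 'y', 'z']].to_numpy()
        # pairwise absolute differences, shape (len(cur), len(prev), 3)
        diff = np.abs(cur[:, None, :] - prev[None, :, :])
        drop.loc[d.index] = (diff <= np.array(tol)).all(axis=2).any(axis=1)
    return df[~drop]

filter_by_previous_rank(df)  # same result as df[~mask] above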
https://www.tutorialguruji.com/python/pandas-using-the-previous-rank-values-to-filter-out-current-row/
CC-MAIN-2021-43
en
refinedweb
Use your singletons wisely Know when to use singletons, and when to leave them behind.
public class MyTestCase extends TestCase {
    ...
    public void testBThrowsException() {
        MockB b = new MockB();
        b.throwExceptionFromMethodC(NoSuchElementException.class);
        A a = new A(b); // Pass in the mock version
        try {
            a.doSomethingThatCallsMethodC();
        } catch (NoSuchElementException success) {
            // Check exception parameters?
        }
    }
    ...
}
Singletons know too much
public class Deployment {
    ...
    public void deploy(File targetFile) {
        Deployer.getInstance().deploy(this, targetFile);
    }
    ...
}
public class Deployment {
    private Deployer deployer;

    public Deployment(Deployer aDeployer) {
        deployer = aDeployer;
    }

    public void deploy(File targetFile) {
        deployer.deploy(this, targetFile);
    }
    ...
}
public class DeploymentTestCase extends TestCase {
    ...
    public void testTargetFileDoesNotExist() {
        MockDeployer deployer = new MockDeployer();
        deployer.doNotFindAnyFiles();
        try {
            Deployment deployment = new Deployment(deployer);
            deployment.deploy(new File("validLocation"));
        } catch (FileNotFoundException success) {
        }
    }
    ...
}
public class MyApplicationToolbox {
    private static MyApplicationToolbox instance;

    public static MyApplicationToolbox getInstance() {
        if (instance == null) {
            instance = new MyApplicationToolbox();
        }
        return instance;
    }

    protected MyApplicationToolbox() {
        initialize();
    }

    protected void initialize() {
        // Your code here
    }

    private AnyComponent anyComponent;

    public AnyComponent getAnyComponent() {
        return anyComponent;
    }

    ...

    // Optional: standard extension allowing
    // runtime registration of global objects.
    private Map components;

    public Object getComponent(String componentName) {
        return components.get(componentName);
    }

    public void registerComponent(String componentName, Object component) {
        components.put(componentName, component);
    }

    public void deregisterComponent(String componentName) {
        components.remove(componentName);
    }
}
https://www.ibm.com/developerworks/library/co-single/
CC-MAIN-2020-29
en
refinedweb
Here is my answer:
public class ArrayProgram {
    public static void main(String[] args) {
        for (int x = 1; x < 400; x++) {
            int[] multArray = { 13 * x };
            if (multArray[0] < 400) {
                System.out.println(multArray[0]);
            } else {
                System.out.println("stop");
                break;
            }
        }
    }
}
Lastly, I need a pal to hook up with in Java and Python. I am a Nigerian self-taught newbie programmer. Here is my WhatsApp number: +2348031394935. Thanks.
https://www.dreamincode.net/forums/topic/400248-i-need-help-with-this-array-question/
CC-MAIN-2020-29
en
refinedweb
The Scanner class was introduced in Java 5. The reset() method was added in Java 6, and a couple of new constructors were added in Java 7 for interoperability with the (then) new Path interface.
Scanner scanner = new Scanner(System.in); // Scanner obj to read System input
String inputTaken = "";
while (true) {
    String input = scanner.nextLine(); // reading one line of input
    if (input.matches("\\s+")) // if it matches spaces/tabs, stop reading
        break;
    inputTaken += input + " ";
}
System.out.println(inputTaken);
The scanner object is initialized to read input from the keyboard. So for the below input from the keyboard, it'll produce the output as
Reading from keyboard
Reading from keyboard
//space
Scanner scanner = null;
try {
    scanner = new Scanner(new File("Names.txt"));
    while (scanner.hasNext()) {
        System.out.println(scanner.nextLine());
    }
} catch (Exception e) {
    System.err.println("Exception occurred!");
} finally {
    if (scanner != null)
        scanner.close();
}
Here a Scanner object is created by passing a File object containing the name of a text file as input. This text file will be opened by the File object and read in by the scanner object in the following lines. scanner.hasNext() will check to see if there is a next line of data in the text file. Combining that with a while loop will allow you to iterate through every line of data in the Names.txt file. To retrieve the data itself, we can use methods such as nextLine(), nextInt(), nextBoolean(), etc. In the example above, scanner.nextLine() is used. nextLine() refers to the following line in a text file, and combining it with a scanner object allows you to print the contents of the line. To close a scanner object, you would use scanner.close(). Using try with resources (from Java 7 onwards), the above mentioned code can be written elegantly as below.
try (Scanner scanner = new Scanner(new File("Names.txt"))) {
    while (scanner.hasNext()) {
        System.out.println(scanner.nextLine());
    }
} catch (Exception e) {
    System.err.println("Exception occurred!");
}
You can use Scanner to read all of the text in the input as a String, by using \Z (entire input) as the delimiter. For example, this can be used to read all text in a text file in one line:
String content = new Scanner(new File("filename")).useDelimiter("\\Z").next();
System.out.println(content);
Remember that you'll have to close the Scanner, as well as catch the IOException this may throw, as described in the example Reading file input using Scanner. You can use custom delimiters (regular expressions) with Scanner, with .useDelimiter(","), to determine how the input is read. This works similarly to String.split(...). For example, you can use Scanner to read from a list of comma separated values in a String:
Scanner scanner = null;
try {
    scanner = new Scanner("i,like,unicorns").useDelimiter(",");
    while (scanner.hasNext()) {
        System.out.println(scanner.next());
    }
} catch (Exception e) {
    e.printStackTrace();
} finally {
    if (scanner != null)
        scanner.close();
}
This will allow you to read every element in the input individually. Note that you should not use this to parse CSV data; instead, use a proper CSV parser library (see CSV parser for Java for other possibilities). The following is how to properly use the java.util.Scanner class to interactively read user input from System.in (sometimes referred to as stdin, especially in C, C++ and other languages, as well as in Unix and Linux). It idiomatically demonstrates the most common things that are requested to be done.
package com.stackoverflow.scanner;

import javax.annotation.Nonnull;
import java.math.BigInteger;
import java.net.MalformedURLException;
import java.net.URL;
import java.util.*;
import java.util.regex.Pattern;
import static java.lang.String.format;

public class ScannerExample {
    private static final Set<String> EXIT_COMMANDS;
    private static final Set<String> HELP_COMMANDS;
    private static final Pattern DATE_PATTERN;
    private static final String HELP_MESSAGE;

    static {
        final SortedSet<String> ecmds = new TreeSet<String>(String.CASE_INSENSITIVE_ORDER);
        ecmds.addAll(Arrays.asList("exit", "done", "quit", "end", "fino"));
        EXIT_COMMANDS = Collections.unmodifiableSortedSet(ecmds);
        final SortedSet<String> hcmds = new TreeSet<String>(String.CASE_INSENSITIVE_ORDER);
        hcmds.addAll(Arrays.asList("help", "helpi", "?"));
        HELP_COMMANDS = Collections.unmodifiableSet(hcmds);
        DATE_PATTERN = Pattern.compile("\\d{4}([-\\/])\\d{2}\\1\\d{2}");
        HELP_MESSAGE = format("Please enter some data or enter one of the following commands to exit %s", EXIT_COMMANDS);
    }

    /**
     * Using exceptions to control execution flow is always bad.
     * That is why this is encapsulated in a method; this is done this
     * way specifically so as not to introduce any external libraries,
     * so that this is a completely self contained example.
     * @param s possible url
     * @return true if s represents a valid url, false otherwise
     */
    private static boolean isValidURL(@Nonnull final String s) {
        try {
            new URL(s);
            return true;
        } catch (final MalformedURLException e) {
            return false;
        }
    }

    private static void output(@Nonnull final String format, @Nonnull final Object... args) {
        System.out.println(format(format, args));
    }

    public static void main(final String[] args) {
        final Scanner sis = new Scanner(System.in);
        output(HELP_MESSAGE);
        while (sis.hasNext()) {
            if (sis.hasNextInt()) {
                final int next = sis.nextInt();
                output("You entered an Integer = %d", next);
            } else if (sis.hasNextLong()) {
                final long next = sis.nextLong();
                output("You entered a Long = %d", next);
            } else if (sis.hasNextDouble()) {
                final double next = sis.nextDouble();
                output("You entered a Double = %f", next);
            } else if (sis.hasNext("\\d+")) {
                final BigInteger next = sis.nextBigInteger();
                output("You entered a BigInteger = %s", next);
            } else if (sis.hasNextBoolean()) {
                final boolean next = sis.nextBoolean();
                output("You entered a Boolean representation = %s", next);
            } else if (sis.hasNext(DATE_PATTERN)) {
                final String next = sis.next(DATE_PATTERN);
                output("You entered a Date representation = %s", next);
            } else { // unclassified
                final String next = sis.next();
                if (isValidURL(next)) {
                    output("You entered a valid URL = %s", next);
                } else {
                    if (EXIT_COMMANDS.contains(next)) {
                        output("Exit command %s issued, exiting!", next);
                        break;
                    } else if (HELP_COMMANDS.contains(next)) {
                        output(HELP_MESSAGE);
                    } else {
                        output("You entered an unclassified String = %s", next);
                    }
                }
            }
        }
        /*
           This will close the underlying Readable, in this case System.in, and free those resources.
           You will not be able to read from System.in anymore after you call .close().
           If you wanted to use System.in for something else, then don't close the Scanner.
        */
        sis.close();
        System.exit(0);
    }
}
import java.util.Scanner;

Scanner s = new Scanner(System.in);
int number = s.nextInt();
If you want to read an int from the command line, just use this snippet. First of all, you have to create a Scanner object that listens to System.in, which is by default the command line, when you start the program from the command line.
After that, with the help of the Scanner object, you read the first int that the user passes into the command line and store it in the variable number. Now you can do whatever you want with that stored int. If you use a Scanner with System.in as the parameter for the constructor, be aware that closing the Scanner will close the InputStream too, meaning that every subsequent attempt to read input on that (or any other) Scanner object will throw a java.util.NoSuchElementException or a java.lang.IllegalStateException. Example:
Scanner sc1 = new Scanner(System.in);
Scanner sc2 = new Scanner(System.in);
int x1 = sc1.nextInt();
sc1.close();
// java.util.NoSuchElementException
int x2 = sc2.nextInt();
// java.lang.IllegalStateException
x2 = sc1.nextInt();
https://sodocumentation.net/java/topic/551/scanner
CC-MAIN-2020-29
en
refinedweb
Django Middleware to allow Googlebot access to paywalled, or login-only, content. This Django middleware automatically logs in Googlebot as the googlebot user. Finally, create a googlebot user. This account will be used when Googlebot is automatically logged in. If you don't create an account, then one will be created automatically. UADetector is a library to identify over 190 different desktop and mobile browsers and 130 other User-Agents like feed readers, email clients and multimedia players. In addition, more than 400 robots like BingBot, Googlebot or Yahoo Bot can be identified. Django Simple Captcha is an extremely simple, yet highly customizable Django application to add captcha images to any Django form. The current development version supports Python 3 via the six compatibility layer. You will need to install Pillow because PIL doesn't support Python 3 yet. django-role-permissions is a Django app for role-based permissions. It's built on top of the django.contrib.auth user Group and Permission functionality and it does not add any other models to your project. django-role-permissions supports Django versions from 1.5 till the latest. These docs pertain to the upcoming 1.0 release; current docs can be found here. If you have issues with the "django-registration-redux" package then please raise them here. This is a fairly simple user-registration application for Django, designed to make allowing user signups as painless as possible. It requires a functional installation of Django 1.11 or newer, but has no other dependencies. django-environ allows you to use the Twelve-factor methodology to configure your Django application with environment variables. See the similar code, sans django-environ. django-timepiece is a multi-user application for tracking people's time on projects. Documentation is available on Read the Docs. This app was formerly called django-stripe-payments and has been renamed to avoid namespace collisions and to have more consistency with Pinax. Pinax is an open-source platform built on the Django web framework. It is an ecosystem of reusable Django apps and starter project templates.
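The core idea of such a middleware is small enough to sketch. The following is illustrative only, not the package's actual code; the user-agent check, the 'googlebot' username and the backend string are assumptions, and a production version should verify the crawler via reverse DNS, since user agents are trivially spoofed:

from django.contrib.auth import login
from django.contrib.auth.models import User

class GooglebotLoginMiddleware:
    # requires django.contrib.auth middleware to run first, so request.user exists
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        user_agent = request.META.get('HTTP_USER_AGENT', '')
        if 'Googlebot' in user_agent and not request.user.is_authenticated:
            # create the googlebot account on first use, mirroring the behavior described above
            user, _ = User.objects.get_or_create(username='googlebot')
            login(request, user, backend='django.contrib.auth.backends.ModelBackend')
        return self.get_response(request)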
https://www.findbestopensource.com/product/macropin-django-googlebot
CC-MAIN-2020-29
en
refinedweb
Since we are using logcat as a console for Android, there are cases when the output text/message is quite big and I can't see the complete output. Logcat shows only the starting part of it. Is there a way to expand it so that I can see the full message? If you want to write long messages to see in logcat it may be worth writing your own wrapper around the android.util.Log methods which splits your long message over multiple lines. ### This is the way I solved the problem. Hope it helps. The important method for using it inside your code is splitAndLog.
public class Utils {
    /**
     * Divides a string into chunks of a given character size.
     *
     * @param text String text to be sliced
     * @param sliceSize int Number of characters
     * @return ArrayList<String> Chunks of strings
     */
    public static ArrayList<String> splitString(String text, int sliceSize) {
        ArrayList<String> textList = new ArrayList<String>();
        String aux;
        int left = -1, right = 0;
        int charsLeft = text.length();
        while (charsLeft != 0) {
            left = right;
            if (charsLeft >= sliceSize) {
                right += sliceSize;
                charsLeft -= sliceSize;
            } else {
                right = text.length();
                charsLeft = 0;
            }
            aux = text.substring(left, right);
            textList.add(aux);
        }
        return textList;
    }

    /**
     * Divides a string into chunks.
     *
     * @param text String text to be sliced
     * @return ArrayList<String>
     */
    public static ArrayList<String> splitString(String text) {
        return splitString(text, 80);
    }

    /**
     * Divides the string into chunks for displaying them
     * in Eclipse's LogCat.
     *
     * @param text The text to be split and shown in LogCat
     * @param tag The tag in which it will be shown.
     */
    public static void splitAndLog(String tag, String text) {
        ArrayList<String> messageList = Utils.splitString(text);
        for (String message : messageList) {
            Log.d(tag, message);
        }
    }
}
### I never use the GUI to view logcat output, so I'm not sure where/whether there are scrollbars in the DDMS/Eclipse UI. Anyway, you can use logcat from the command line — there are loads of options. To watch the log of an active device continually: adb logcat To dump the whole log: adb logcat -d To dump the whole log to a file: adb logcat -d > log.txt To filter and display a particular log tag: adb logcat -s MyLogTag …and much more! ### Of course, you can change the column width, just by going to the end of the line and clicking and dragging. That is a pain for really long messages. If I have a really long message, I generally copy the line and paste it into a text file. Ctrl-C in Windows will copy it. ### To add to Jay Askren's answer, you can also double-click on the right edge of the "text" column header to expand it fully. I've noticed that even so there's a limit to the number of chars Eclipse will display.
https://throwexceptions.com/android-how-can-i-see-long-texts-msg-in-logcat-throwexceptions.html
CC-MAIN-2020-29
en
refinedweb
#define ALLEGRO_UNSTABLE #include <allegro5/allegro.h> This is a really, really good idea. When you think about it, the implementation is the same as DEPRECATED warnings. I do want an ALLEGRO_NO_DEPRECATED as well. This'd just affect the headers. I'm thinking ALLEGRO_FUNC and ALLEGRO_UNSTABLE_FUNC. Either way, there wouldn't be a 'no unstable features' CMake option or anything. "For in much wisdom is much grief: and he that increases knowledge increases sorrow."-Ecclesiastes 1:18[SiegeLord's Abode][Codes]:[DAllegro5]:[RustAllegro] How do you expect to handle ABI compatibility? Especially when "unstable" things are removed or renamed? -- Don't know yet... maybe all of this is impossible. I thought it might be possible to somehow split the unstable features into a separate optional library... Not sure if that is possible if they share the same source file. Maybe some clever linker scripts? For C a stable ABI can be achieved by not changing the parameters to functions and only adding new members to structs, not removing or reordering them. That means once a function is declared to be a stable API, it can't change its parameters anymore nor may it be removed. But with some #define preprocessor tricks this can be remedied easily enough. It would very well be possible to do this with what SiegeLord proposes. How do you expect to handle ABI compatibility? Especially when "unstable" things are removed or renamed? I'm not a dev, so maybe you're talking about internals. But if you mean externally, I see no reason people should expect an "unstable" function never to change. The same way as between the existing two branches. [edit] But it looks like you're talking about internals. What's the problem this introduces? If you've got, say, a DLL, and you add a new function, do all the other functions get moved around and broken? Surely dynamic libraries work on a function name level and not a byte offset? I guess in the case of a compiled program where the DLL ABI changes, you'd get "function name exists but was changed" or "you are calling a deprecated unstable function". But couldn't we introduce some boilerplate for those rare cases, where we have a list of functions that immediately flag a critical-level Allegro error that says, "This program is calling a deprecated function. There has likely been a DLL version mismatch."? Additionally, since we'd be forcing people to use ALLEGRO_UNSTABLE, it's the developer's fault if they assume that functions will never change and don't statically link or provide updates. [edit] Doing some reading. This seems like a fairly good attempt at a C++ ABI, if you're wanting something to read for fun. Basically, they use extern "abi" {}, and the OS (not the compiler) defines how that section must be implemented. And to deal with std::library issues (std::vector is a huge issue that can't even be used in public library functions), they introduce std::abi::library functions which are binary stable. -----sig:“Programs should be written for people to read, and only incidentally for machines to execute.” - Structure and Interpretation of Computer Programs"Political Correctness is fascism disguised as manners" --George Carlin But it looks like you're talking about internals. I was talking more about the dll's or so's themselves. But it doesn't seem to be a problem.
At most we'll have to worry about public types that are allocated statically (i.e. ALLEGRO_COLOR, but why would that change all that often?) @Chris Katko: That's an unimplemented proposal for C++ which is a huge mess when it comes to a stable ABI due to name mangling. Name mangling is the reason why there is no stable ABI and slower DLL loading with C++. Plain C doesn't have any of those problems; the ABI stays stable when you stick to a few basic rules. Another reason why plain C is better. That's an unimplemented proposal for C++ which is a huge mess when it comes to a stable ABI due to name mangling. Yes, yes. But a man can dream, can't he? So I checked Linux GCC, Windows GCC (via MinGW) and MSVC, and they all support the general ABI idea I was going for. Basically, if your program doesn't use a function from a shared library, then the next version of the library can safely remove that function without breaking ABI compatibility. So if you compile your code without ALLEGRO_UNSTABLE in e.g. 5.2.0, then your code will continue to work with DLLs produced for any later 5.2.x version. If you do use ALLEGRO_UNSTABLE, then your program is only guaranteed to work with the very same version you compiled for (i.e. 5.2.0 in this example). This is the same ABI guarantee we currently provide with the current dual branch arrangement. Code compiled with 5.0.x works with shared libraries with version 5.0.y where y >= x. Code compiled with 5.1.x works only with shared libraries with the same version. It occurs to me that we could actually provide an option to build Allegro without any visible unstable entries, for things like Debian which might not like these unstable ABI breakages. A cursory search reveals that gstreamer has done this for years. A cursory search reveals that gstreamer has done this for years. Hah, great. Although I still prefer my opt-in approach more than silencing a warning. I get the impression many people routinely turn a blind eye to warnings. Yeah, I get that impression, too. Besides, opting in feels more appropriate. Re #allegro: Apparently, __declspec(deprecated) is the deprecated macro for MSVC. I don't have MSVC, so I can't test it. Edit: Hehe, I think you may have already found the Stack Overflow post...
https://www.allegro.cc/forums/thread/614985/1010015
CC-MAIN-2020-29
en
refinedweb
Sometimes you need to distinguish the data plotted by different series in a chart. The need arises when the data points of the series do not fall in the same range; in other words, the Y values of the series lie in different ranges. For instance, there could be two series: the Y values for one might lie between 0 and 100 and those for the other between 0 and -100. In addition, the data of the series could require different scales altogether. In such cases, displaying the Y values of the series on a single Y axis can confuse the interpretation of the data and cause the series to overlap. FlexChart automatically scales the axes to avoid this situation and represents the data in a better way. The GIF below displays axis scaling of different data series in FlexChart. The code snippet below can be used to bind and add two series in FlexChart. Using similar code, you can add more series to the FlexChart, as shown in the GIF above.
private void Window_Loaded(object sender, RoutedEventArgs e)
{
    flexChart.ChartType = C1.Chart.ChartType.LineSymbols;
    flexChart.Binding = "Y";
    flexChart.BindingX = "X";
    dataObj = new DataSource();
    // adds and binds two series with data having different value (Y) scales
    flexChart.Series.Add(new C1.WPF.Chart.Series());
    flexChart.Series[0].ItemsSource = dataObj.FirstData;
    flexChart.Series.Add(new C1.WPF.Chart.Series());
    flexChart.Series[1].ItemsSource = dataObj.SecondData;
}

public class DataSource
{
    private Random rnd = new Random();

    public List<Point> FirstData
    {
        get
        {
            var data = new List<Point>();
            for (int i = 1; i <= 50; i++)
            {
                data.Add(new Point(i, rnd.Next(10, 100)));
            }
            return data;
        }
    }

    public List<Point> SecondData
    {
        get
        {
            var data = new List<Point>();
            for (int i = 1; i <= 50; i++)
            {
                data.Add(new Point(i, rnd.Next(200, 300)));
            }
            return data;
        }
    }
}
https://www.grapecity.com/componentone/docs/wpf/online-flexchart/AxesScaling.html
CC-MAIN-2020-29
en
refinedweb
MPI communication Since MPI processes are independent, in order to coordinate work, they need to communicate by explicitly sending and receiving messages. There are two types of communication in MPI: point-to-point communication and collective communication. In point-to-point communication messages are sent between two processes, whereas a collective communication involves a number of processes at the same time. Collective communication will be discussed in more detail later, but let us focus now on sending and receiving data between two processes. Point-to-point communication In a nutshell, in point-to-point communication one process sends a message (some data) to another process that receives it. The important thing to remember is that the sends and receives in a program have to match: one receive per one send. In addition to matching each send call with a corresponding receive call, one needs to pay particular attention to match also the destination and source ranks for the communication. A message is always sent to a given process (destination rank) and, similarly, received from a given process (source rank). One can think of the destination and source ranks as the addresses for the messages, i.e. "please send the message to this address" and "is there a message coming from this address?". Example: Sending and receiving a dictionary
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    data = {'a': 7, 'b': 3.14}
    comm.send(data, dest=1)
elif rank == 1:
    data = comm.recv(source=0)
Sending and receiving data Python objects can be communicated with the send() and recv() methods of a communicator. It works for any Python object that can be serialised into a byte stream, i.e. any object that can be pickled. This includes all standard Python objects and most derived ones as well. The basic interfaces (check the mpi4py documentation for optional arguments) of the methods are:
.send(data, dest)
  data: Python object to send
  dest: destination rank
.recv(source)
  source: source rank
  note: data is provided as return value
The normal send and receive routines are blocking, i.e. the functions exit only once it is safe to use the data (memory) involved in the communication. This means that the completion depends on the other process and that there is a risk of a deadlock. For example, if both processes call recv() first there is no-one left to call a corresponding send() and the program is stuck forever. Typical point-to-point communication patterns are shown below. Incorrect ordering of sends and receives may result in a deadlock. © CC-BY-NC-SA 4.0 by CSC - IT Center for Science Ltd.
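As a concrete illustration of matching sends and receives, here is a minimal sketch (using only the mpi4py calls shown above) of two processes exchanging data without deadlocking, by ordering the calls so that one side sends first while the other receives first:

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

data = {'rank': rank}
if rank == 0:
    comm.send(data, dest=1)      # rank 0 sends first...
    other = comm.recv(source=1)  # ...then receives
elif rank == 1:
    other = comm.recv(source=0)  # rank 1 receives first...
    comm.send(data, dest=0)      # ...then sends

If both ranks called recv() first, neither send() would ever be reached and the program would hang.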
https://www.futurelearn.com/courses/python-in-hpc/0/steps/65143?main-nav-submenu=main-nav-categories
CC-MAIN-2020-29
en
refinedweb
I understand the logical steps of asymmetric key cryptography as it relates to TLS; however, I've started using Git, and in a bid to avoid having to use a password for authentication, I've set up ssh keys for passwordless authentication. I understand that these ssh keys are complements of each other, but I do not understand how the actual authentication is taking place. I've copied the public key to Git and stored the private key locally. As such, I am able to do what I set out to do (passwordless authentication), but I do not know the underlying steps as to why the authentication is successful. I've tried searching the web but every answer I've found thus far has been too high level in that they did not specify the steps. For example, were I looking for the TLS steps, I'd expect something along the lines of: Check cert of https page (server) – Grab public key and encrypt secret with it – Securely send secret to server, which should be the only entity with the corresponding private key to decrypt – Server and client now switch over to encrypted communications using the now-shared secret. Tag: steps Steps to take to ensure Android security? [closed] I am aware that I should keep Android up to date and have an antivirus like MalwareBytes. I also use a VPN for connections. What other steps should I take to secure my Android phone? In addition, how can I check which apps are transmitting data? (I also scan apps using Play Protect.) If this question has already been answered in detail, could you link to it please? Is $nHALT$ undecidable even if $M$ halts on input $w$ in finite steps If we have the language $nHALT = \{\langle M, w, n \rangle \mid M$ halts on input $w$ in less than $n$ steps$\}$, is this language also undecidable in the same way that $HALT$ is undecidable? And if so, $nHALT \notin P$, right? How to prove that a Turing machine that can move right only a limited number of steps is not equal to a normal Turing machine I need to prove that a Turing machine that can move only k steps on the tape after the last letter of the input word is not equal to a normal Turing machine. My idea is that, given a finite input with a finite alphabet, the limited machine can write only a finite number of "outputs" on the tape, while a normal Turing machine has an infinite tape so it can write infinite "outputs", but I have no idea how to make it a formal proof. Any help will be appreciated. Probabilistic Turing machine – Probability that the head has moved k steps to the right on the work tape I have a PTM with the following transitions: $\delta(Z_0, \square, 0) = (Z_0, \square, L, R)$, $\delta(Z_0, \square, 1) = (Z_0, \square, R, R)$. Suppose that this PTM executes n steps. What is the probability that the head has moved k steps to the right on the work tape (in total, i.e., k is the difference between moves to the right and moves to the left)? 4 Steps to Reach Your Goals Achieve Success FASTER Emotional Speech Set goals. Setting goals and putting them in a plan is important for achieving them, as there are some basics to follow when setting goals, including: Clearly define the goal, which is what needs to be achieved, and be measurable, in addition to its realism in order to challenge the person himself, while avoiding setting impossible goals to avoid frustration and failure, and a time limit must be set to achieve them. Setting… Minimum steps to sort array Consider.
Number of `m` length walks from a vertex with steps in [1, s] The problem is stated as the following: We are given a grid graph $G$ of $N \times N$, represented by a series of strings that describe vertices s.t. - $L$ is the vertex we're interested in - $P$ are vertices that are unavailable - $.$ are vertices that are available e.g.: .... ...L ..P. P... Would mean a graph that looks like this 0 1 2 3 +-------------------+ 0| | | | | | | | | | +-------------------+ 1| | | | | | | | | | +-------------------+ 2| | |XXXX| | | | |XXXX| | +-------------------+ 3|XXXX| | | | |XXXX| | | | +-------------------+ Where $v_{2,3}$ and $v_{0,3}$ are unavailable and we're interested in $v_{3,1}$. From each vertex we're only allowed to move on the axis (we can't move on the diagonal) and a move is valid from $v_{x,y}$ to $v_{q,p}$ if - $|x-q| + |y-p| \leq s$ and $v_{q,p}$ is available. - Staying in the same spot is also a valid move Given $m$ – maximal number of moves and $s$, what is the number of ways we can make $m$ moves from the vertex designated by $L$. My attempt was to - First compute the neighbors reachable from each node. Create a lookup s.t. $\forall v$, $N[v]$ is the list of reachable nodes from $v$ - Then build a starting record $M_0$ s.t. if a node is reachable $M[i][j] = 1$ and $0$ otherwise. - Then for each step calculate for $\forall i,j \in N$ (all the grid) $M_{i}[i][j] = \sum_{v\in N[v]} M_{i-1}[v_i][v_j]$ (where $v_i, v_j$ are the coordinates of $v$ on the grid) and store in a matrix $M_i$ We iterate until $i==m$. - for each $v_{i,j}$ : 1. for each neighbor $n$ of $v_{i,j}$ : 1. $M[i][j] += M'[n_i][n_j]$ Unfortunately this doesn't work (tried to do it with a pen and paper as well to make sure) and I get fewer results than the expected answer; apparently there should be 385 ways but I only get to 187. Here are the intermediate states for the above mentioned board: ---------------------------- 5 6 5 5 5 7 6 6 4 6 0 5 0 5 4 5 ---------------------------- 25 34 27 27 27 41 33 34 20 33 0 27 0 27 20 25 ---------------------------- 133 187 146 149 146 229 182 187 105 182 0 146 0 146 105 133 ---------------------------- This did work for e.g. m=2 and s=1 for the following board: 0 1 2 +---+---+---+ 0| | | | | | | | +-----------+ 1| | | | | | | | +-----------+ 2| | | | | | | | +---+---+---+ Here's my reference code (findWalks is the main function): using namespace std; using Cord = std::pair<size_t, size_t>; auto hash_pair = [](const Cord& c) { return std::hash<size_t>{}(c.first) ^ (std::hash<size_t>{}(c.second) << 1); }; using NeighborsMap = unordered_map<Cord, vector<Cord>, decltype(hash_pair)>; inline vector<vector<int>> initBoard(size_t n) { return vector<vector<int>>(n, vector<int>(n, 0)); } Cord findPOI(vector<string>& board) { for (size_t i=0; i < board.size(); i++) { for (size_t j=0; j < board.size(); j++) { if (board[i][j] == 'L') { return make_pair(i, j); } } } return make_pair(-1, -1); } NeighborsMap BuildNeighbors(const vector<string>& board, size_t s) { NeighborsMap neighbors(board.size() * board.size(), hash_pair); for (size_t i = 0; i < board.size(); i++) { for (size_t j = 0; j < board.size(); j++) { size_t min_i = i > s ? i - s : 0; size_t max_i = i + s > board.size() - 1 ? board.size() - 1 : i + s; size_t min_j = j > s ? j - s : 0; size_t max_j = j + s > board.size() - 1 ?
board.size() - 1 : j + s; auto key = make_pair(i, j); if (board[i][j] != 'P') { for (size_t x = min_i; x <= max_i; x++) { if (board[x][j] != 'P' && x != i) { neighbors[key].push_back(make_pair(x, j)); } } for (size_t y = min_j; y <= max_j; y++) { if (board[i][y] != 'P' && y != j) { neighbors[key].push_back(make_pair(i, y)); } } neighbors[key].push_back(key); } else { neighbors[key].clear(); } } } return neighbors; } int GetNeighboursWalks(const Cord& cord, NeighborsMap& neighbors, const vector<vector<int>>& prevBoard) { int sum{ 0 }; const auto& currentNeighbors = neighbors[cord]; for (const auto& neighbor : currentNeighbors) { sum += prevBoard[neighbor.first][neighbor.second]; } return sum; } int findWalks(int m, int s, vector<string> board) { vector<vector<int>> currentBoard = initBoard(board.size()); vector<vector<int>> prevBoard = initBoard(board.size()); std::unordered_map<int, std::vector<Cord>> progress; auto poi = findPOI(board); NeighborsMap neighbors = BuildNeighbors(board, s); for (const auto& item : neighbors) { const auto& key = item.first; const auto& value = item.second; prevBoard[key.first][key.second] = value.size(); } for (size_t k = 1; k <= static_cast<size_t>(m); k++) { for (size_t i = 0; i < board.size(); i++) { for (size_t j = 0; j < board.size(); j++) { auto currentKey = make_pair(i, j); currentBoard[i][j] = board[i][j] != 'P' ? GetNeighboursWalks(currentKey, neighbors, prevBoard) : 0; } } std::swap(currentBoard, prevBoard); } return prevBoard[poi.first][poi.second]; } CFG to CNF, but stuck on the last few steps I recently learned about the conversion, but I seem to be stuck. I need to convert the following CFG to CNF: $S → XY$ $X → abb|aXb|e$ $Y → c|cY$ - There is no S on the right side, so I did not need to add another - I removed the null production $X → e$ $S→XY|Y$ $X→abb|aXb|ab|$ $Y→c|cY|$ I removed the unit production $S→Y$, $Y→c$ with $S→c$ There are no production rules which have more than 2 variables Here, I struggle. I am not allowed to have a terminal symbol and a variable together, but I am not sure how to get rid of these. New grammar after step 4: $S→XY|c$ $X→abb|aXb|ab$ $Y→ZY$ $Z→c$ I managed to replace the symbol c with Z, and added the new rule, so that seems fixed. However, I am unsure what to do with $aXb$. Is this okay so far? If yes, what step should I take next? Thank you in advance! Setting different steps for Y-Axes of ListPlot I am trying to set the scaling interval of my Y axis differently than Mathematica automatically does, so in steps of 1000 instead of 2000 (see picture). At the moment I determined the following PlotRange: PlotRange -> {{1, 10}, {0, 8000}} Is there a simple option? For an overview, here is the command: ListPlot [ MapThread[ If[Or[#1[[1]] === 3., #1[[1]] === 8.], Callout[Tooltip[#1, #2], #2], Tooltip[#1, #2]] &, Annotated2], FrameLabel -> {"Happiness Score", "Education Expenditure (per capita)"}, Frame -> True, GridLines -> Automatic, LabelStyle -> {GrayLevel[0]}, PlotRange -> {{1, 10}, {0, 8000}}] // Framed
https://proxieslive.com/tag/steps/
CC-MAIN-2020-29
en
refinedweb
#include <deal.II/fe/mapping_q_cache.h> This class implements a caching strategy for objects of the MappingQ family in terms of the MappingQGeneric::compute_mapping_support_points() function, which is used in all operations of MappingQGeneric. The information of the mapping is pre-computed by the MappingQCache::initialize() function. The use of this class is discussed extensively in step-65. Definition at line 46 of file mapping_q_cache.h. Constructor. polynomial_degree denotes the polynomial degree of the polynomials that are used to map cells from the reference to the real cell. Definition at line 26 of file mapping_q_cache.cc. Copy constructor. Definition at line 34 of file mapping_q_cache.cc. Destructor. Definition at line 43 of file mapping_q_cache.cc. clone() functionality. For documentation, see Mapping::clone(). Reimplemented from MappingQGeneric< dim, spacedim >. Definition at line 57 of file mapping_q_cache.cc. Returns false because the preservation of vertex locations depends on the mapping handed to the reinit() function. Reimplemented from MappingQGeneric< dim, spacedim >. Definition at line 66 of file mapping_q_cache.cc. Initialize the data cache by computing the mapping support points for all cells (on all levels) of the given triangulation. Note that the cache is invalidated upon the signal Triangulation::Signals::any_change of the underlying triangulation. Definition at line 75 of file mapping_q_cache.cc. Initialize the data cache by letting the function given as an argument provide the mapping support points for all cells (on all levels) of the given triangulation. The function must return a vector of Point<spacedim> whose length is the same as the size of the polynomial space, \((p+1)^\text{dim}\), where \(p\) is the polynomial degree of the mapping, and it must be in the order the mapping or FE_Q sort their points, i.e., all \(2^\text{dim}\) vertex points first, then the points on the lines, quads, and hexes according to the usual hierarchical numbering. No attempt is made to validate these points internally, except for the number of given points. Definition at line 91 of file mapping_q_cache.cc. Return the memory consumption (in bytes) of the cache. Definition at line 129 of file mapping_q_cache.cc. This is the main function overridden from the base class MappingQGeneric. Reimplemented from MappingQGeneric< dim, spacedim >. Definition at line 142 of file mapping_q_cache.cc. The point cache filled upon calling initialize(). It is made a shared pointer to allow several instances (created via clone()) to share this cache. Definition at line 138 of file mapping_q_cache.h. The connection to Triangulation::signals::any that must be reset once this class goes out of scope. Definition at line 144 of file mapping_q_cache.h.
https://dealii.org/developer/doxygen/deal.II/classMappingQCache.html
CC-MAIN-2020-10
en
refinedweb
Filtering time series data to remove a specific component in frequency space shows up everywhere in physics and engineering. This blog post describes a generic approach to narrowband filter design and application using Python. In some cases the filters developed in this approach would not be practical to build and implement in an embedded hardware application. You need a real electrical engineer for that! The time data is created from a sampling rate fs and a measurement time tmax. Artificial signal data is then created in three steps. data0 is defined to model an exponential charge signal without noise, data1 to model the signal with wideband noise, and data2 to model the signal with wideband and narrowband noise. Wideband noise is modeled as a Gaussian source (i.e. using np.random.randn) and narrowband noise is modeled as a 60Hz disturbance. The aim of a narrowband filter in this case would be to recover the data1 signal when a 60Hz narrowband filter is applied to the data2 signal.
import numpy as np

# create artificial time data
fs = 400  # sampling rate, 1/sec
tmax = 1  # measurement duration, sec
time = np.arange(start=0, step=1/fs, stop=tmax)

# create artificial signal data representing exponential charging
tc = 0.2  # time constant, sec
data0 = 1 - np.exp(-time / tc)

# create wideband noise and narrowband noise at 60Hz
wideband = 0.05 * np.random.randn(time.size)
narrowband = 0.2 * np.cos(2 * np.pi * 60 * time)

# create artificial signal data with noise sources
data1 = data0 + wideband
data2 = data0 + wideband + narrowband
The power spectral density is estimated with the scipy.signal.welch function with default arguments. Consult Wikipedia or a controls engineer for a more refined approach.
from scipy import signal

# get power spectral density of artificial data (data is any of the signals above, e.g. data2)
freq, psd = signal.welch(data, fs=fs)  # freq in units of Hz
psd = 10 * np.log10(psd)  # psd in units of dB
The scipy IIR filter design documentation (scipy.signal.iirfilter) indicates there are five IIR filter types that can be designed—here is a good reference that explains the differences in those. The example in this blog post uses a Chebyshev type 2 IIR filter due to its flat passband and sharp rolloff. The Python code below represents all steps to create and apply the filter to artificial data. scipy.signal.freqz can be used to get the filter response, and scipy.signal.welch can be used to get the power spectral density of the filtered signal. Also note the conversions to end up in units of dB and Hz for the filter response and power spectral density.
from scipy import signal
import numpy as np

fn = fs / 2  # Nyquist frequency, Hz
fsb = 4  # width of the stopband, Hz
fdisturbance = 60  # center of the stopband, Hz (implied by the 60Hz disturbance above)
fstopband = (fdisturbance - fsb / 2, fdisturbance + fsb / 2)
steepness = 0.92  # percentage measure of steepness of stopband boundaries
# steepness = 100% means vertical stopband boundaries (unstable)
gpass = 3  # max attenuation in passband, dB
gstop = 20  # min attenuation in stopband, dB

# create a filter based on the stopband parameters
ws = np.array(fstopband) / fn
boundary = fsb * (1 - steepness) / fn
wp = np.array([ws[0] - boundary, ws[1] + boundary])
N, Wn = signal.cheb2ord(wp=wp, ws=ws, gpass=gpass, gstop=gstop)
b, a = signal.iirfilter(N=N, Wn=Wn, rp=gpass, rs=gstop, btype='bandstop', ftype='cheby2')

# get the filter response and convert units
w, h = signal.freqz(b, a)
w = w * fs / (2 * np.pi)  # units of Hz
h = 20 * np.log10(np.abs(h))  # units of dB

# apply the filter to artificial data
data2f = signal.lfilter(b, a, data2)

# get power spectral density of the filtered signal
freq, psd = signal.welch(data2f, fs=fs)  # freq in units of Hz
psd = 10 * np.log10(psd)  # psd in units of dB
Applying the filter to a new signal is then just a call to the scipy.signal.lfilter function and you are off. The next section of this blog post includes an interactive visualization of the complete narrowband filter design and application process, where the artificial data and filter parameters can be changed to see the effects.
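One variation worth knowing: signal.lfilter applies the filter causally, so the filtered signal carries some phase delay. For offline analysis scipy also provides signal.filtfilt, which runs the same filter forward and backward for zero net phase shift. A minimal sketch reusing the b, a coefficients designed above:

from scipy import signal

# zero-phase application of the notch filter designed above;
# suitable for offline data, not for real-time/streaming use
data2f_zero_phase = signal.filtfilt(b, a, data2)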
https://rbtechblog.com/blog/notch_filter
CC-MAIN-2020-10
en
refinedweb
Installing the Prerequisites The jenkinsapi package is easy to install because it is pip-compatible. You can install it to your main Python installation or in a Python virtual environment by using the following command: pip install jenkinsapi You will also need to install requests, which is also pip-compatible: pip install requests These are the only packages you need. Now you can move on to the next section! Querying Jenkins The first step that you will want to accomplish is getting the jobs by their status. The standard statuses that you will find in Jenkins are SUCCESS, UNSTABLE or ABORTED. Let's write some code to find only the UNSTABLE jobs:
from jenkinsapi.jenkins import Jenkins
from jenkinsapi.custom_exceptions import NoBuildData
from requests import ConnectionError

def get_job_by_build_state(url, view_name, state='SUCCESS'):
    server = Jenkins(url)
    view_url = f'{url}/view/{view_name}/'
    view = server.get_view_by_url(view_url)
    jobs = view.get_job_dict()

    jobs_by_state = []

    for job in jobs:
        job_url = f'{url}/{job}'
        j = server.get_job(job)
        try:
            build = j.get_last_completed_build()
            status = build.get_status()
            if status == state:
                jobs_by_state.append(job)
        except NoBuildData:
            continue
        except ConnectionError:
            pass

    return jobs_by_state

if __name__ == '__main__':
    jobs = get_job_by_build_state(url='', view_name='VIEW_NAME', state='UNSTABLE')
Here you create an instance of Jenkins and assign it to server. Then you use get_view_by_url() to get the specified view name. The view is basically a set of associated jobs that you have set up. For example, you might create a group of jobs that does dev/ops type things and put them into a Utils view. Once you have the view object, you can use get_job_dict() to get a dictionary of all the jobs in that view. Now that you have the dictionary, you can loop over them and get the individual jobs inside the view. You can get the job by calling the Jenkins object's get_job() method. Now that you have the job object, you can finally drill down to the build itself. To prevent errors, I found that you could use get_last_completed_build() to get the last completed build. This is for the best, as if you use get_build() and the build hasn't finished, the build object may not have the contents that you expect in it. Now that you have the build, you can use get_status() to get its status and compare it to the one that you passed in. If they match, then you add that job to jobs_by_state, which is a Python list. You also catch a couple of errors that can happen. You probably won't see NoBuildData unless the job was aborted or something really odd happens on your server. The ConnectionError exception happens when you try to connect to a URL that doesn't exist or is offline. At this point you should now have a list of the jobs filtered to the status that you asked for. If you'd like to drill down further into sub-jobs within the job, then you need to call the build's has_resultset() method to verify that there are results to inspect. Then you can do something like this:
resultset = build.get_resultset()
for item in resultset.items():
    # do something here
The resultset that is returned varies quite a bit depending on the job type, so you will need to parse the item tuple yourself to see if it contains the information you need. Wrapping Up At this point, you should have enough information to start digging around in Jenkins' internals to get the information you need.
I have used a variation of this script to extract information on failed builds, which has helped me discover jobs that fail repeatedly sooner than I would have otherwise. The documentation for jenkinsapi is unfortunately not very detailed, so you will be spending a lot of time in the debugger trying to figure out how it works.
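For example, here is a minimal sketch of the inspection loop stubbed out earlier. It uses only the methods named in this post (has_resultset(), get_resultset(), items()); the exact shape of each item varies by job type, so treat the unpacking as an assumption to verify in the debugger:

if build.has_resultset():
    resultset = build.get_resultset()
    for name, result in resultset.items():
        # print each entry to discover what fields your job type provides
        print(name, result)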
http://www.blog.pythonlibrary.org/2020/01/14/getting-jenkins-jobs-by-build-state-with-python/
CC-MAIN-2020-10
en
refinedweb
Synopsis Create and view spectral files for MARX (CIAO contributed package). Syntax from sherpa_contrib.marx import * Description The sherpa_contrib.marx module provides routines for creating and viewing the input spectral files for users of MARX (the Chandra simulator) and is distributed as part of the CIAO contributed scripts package. The module can be loaded into Sherpa by saying either of: from sherpa_contrib.marx import * from sherpa_contrib.all import * where the second form loads in all the Sherpa contributed routines, not just the marx module. Contents The marx module currently provides the following routines: See the ahelp file for the routine and the contributed scripts page for further information. Changes in the scripts 4.11.4 (2019) release Plotting can now use matplotlib The plot_marx_spectrum() routine now uses the Sherpa plot backend (controlled by the plot_pkg setting in a user's ~/.sherpa.rc file), rather than always using ChIPS. The Y axis now displays the units required by MARX - namely photon/cm^2/s/keV - rather than photon/cm^2/s. As part of this update the extra labelling in the plot - that gave the model name and dataset identifier - has been removed (although the model name is now included in the plot title). Changes in the scripts 4.11.2 (April 2019) release Fixes to save_marx_spectrum The sherpa_contrib.marx.save_marx_spectrum() function now normalizes the output by the bin width, as expected by MARX. Changes in the scripts 4.10.3 (October 2018) release The sherpa_contrib.marx module - which provides the save_marx_spectrum, plot_marx_spectrum, and get_marx_spectrum routines - is new in this release. Bugs See the bugs pages for an up-to-date listing of known bugs.
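A minimal usage sketch (the call signatures and file name here are illustrative assumptions rather than documented API; run after loading data and setting a source model in Sherpa):

from sherpa_contrib.marx import plot_marx_spectrum, save_marx_spectrum

# inspect the spectrum that will be handed to MARX (photon/cm^2/s/keV)
plot_marx_spectrum()

# write the spectral file for use as MARX input
save_marx_spectrum('marx_input.dat')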
https://cxc.cfa.harvard.edu/sherpa/ahelp/sherpa_marx.html
CC-MAIN-2020-10
en
refinedweb
Command line utility to search for TV and movie torrents and stream using Peerflix automatically. Project description Command line utility that allows you to search for TV and movie torrents and stream using Peerflix automatically. Ezflix provides advanced search capabilities including filtering by sort type (download count, seeds, likes), genres, minimum rating, etc. Ezflix also includes subtitle support where subtitles can be downloaded automatically for the chosen TV show or movie. Install ezflix is available on the Python Package Index (PyPI). You can install it with pip: $ pip install ezflix Usage: ezflix [-h] [--player [{...,airplay}]] [--latest] [--subtitles] [--sort_by [{download_count,like_count,date_added,seeds,peers,rating,title,year}]] [--sort_order [{asc,desc}]] [--quality [{720p,1080p,3d}]] [--genre GENRE] [--remove] [--language LANGUAGE] --genre GENRE Used to filter by a given genre (See for full list) --remove Remove files on exit. --language LANGUAGE Language as IETF code. Set this argument to download subtitles in a given language. Search for thrillers released in 2017 and order by download count descending. $ ezflix movie '2017' --sort_by=download_count --sort_order=desc --genre=thriller Automatically download German subtitles for your chosen TV show or movie. $ ezflix movie 'Goodfellas' --subtitles --language=de Pass the quality argument to only list torrents of a given quality. $ ezflix movie 'They Live' --quality=720p Run development version Before any new changes are pushed to PyPI, you can clone the development version to avail of any new features. $ git clone $ cd ezflix $ virtualenv env $ source env/bin/activate $ pip install -r requirements.txt $ python setup.py install Tests The Python unittest module contains its own test discovery function, which you can run from the command line: $ python -m unittest discover tests/ Programmatic Usage You can use Ezflix programmatically in your own applications. Consider the following example:
from ezflix import Ezflix

ezflix = Ezflix(query="Goodfellas", media_type='movie')
torrents = ezflix.get_torrents()
if len(torrents) > 0:
    for torrent in torrents:
        print(torrent['title'])
        print(torrent['magnet'])
    first = torrents[0]
    file_path = ezflix.find_subtitles(first['title'])
    print(file_path)
https://pypi.org/project/ezflix/1.4.3/
CC-MAIN-2020-10
en
refinedweb
Changing a ComboBox
The first step when trying something new out is to write some code. You'll need to create an instance of wx.ComboBox and pass it a list of choices as well as set the default choice. Of course, you cannot create a single widget in isolation. The widget must be inside of a parent widget. In wxPython, you almost always want the parent to be a wx.Panel that is inside of a wx.Frame. Let's write some code and see how this all lays out:

import wx

class MainPanel(wx.Panel):

    def __init__(self, parent):
        super().__init__(parent)
        self.cb_value = 'One'
        self.combo_contents = ['One', 'Two', 'Three']
        self.cb = wx.ComboBox(self, choices=self.combo_contents,
                              value=self.cb_value, size=(100, -1))
        self.cb.Bind(wx.EVT_TEXT, self.on_text_change)
        self.cb.Bind(wx.EVT_COMBOBOX, self.on_selection)

    def on_text_change(self, event):
        current_value = self.cb.GetValue()
        if current_value != self.cb_value and current_value not in self.combo_contents:
            # Value has been edited
            index = self.combo_contents.index(self.cb_value)
            self.combo_contents.pop(index)
            self.combo_contents.insert(index, current_value)
            self.cb.SetItems(self.combo_contents)
            self.cb.SetValue(current_value)
            self.cb_value = current_value

    def on_selection(self, event):
        self.cb_value = self.cb.GetValue()

class MainFrame(wx.Frame):

    def __init__(self):
        super().__init__(None, title='ComboBox Changing Demo')
        panel = MainPanel(self)
        self.Show()

if __name__ == "__main__":
    app = wx.App(False)
    frame = MainFrame()
    app.MainLoop()

The main part of the code that you are interested in is inside the MainPanel class. Here you create the widget, set its choices list and a couple of other parameters. Next you will need to bind the ComboBox to two events:
- wx.EVT_TEXT – For text change events
- wx.EVT_COMBOBOX – For changing item selection events
The first event, wx.EVT_TEXT, is fired when you change the text in the widget by typing and it also fires when you change the selection. The other event only fires when you change selections. The wx.EVT_TEXT event fires first, so it has precedence over wx.EVT_COMBOBOX.
When you change the text, on_text_change() is called. Here you will check if the current value of the ComboBox matches the value that you expect it to be. You also check to see if the current value matches the choice list that is currently set. This allows you to see if the user has changed the text. If they have, then you want to grab the index of the currently selected item in your choice list. Then you use the list's pop() method to remove the old string and the insert() method to add the new string in its place.
Now you need to call the widget's SetItems() method to update its choices list. Then you set its value to the new string and update the cb_value instance variable so you can check if it changes again later.
The on_selection() method is short and sweet. All it does is update cb_value to whatever the current selection is.
Give the code a try and see how it works!
Wrapping Up
Adding the ability to allow the user to update the wx.ComboBox's contents isn't especially hard. You could even subclass wx.ComboBox and create a version where it does that for you all the time. Another enhancement that might be fun to add is to have the widget load its choices from a config file or a JSON file. Then you could update on_text_change() to save your changes to disk and then your application could save the choices and reload them the next time you start your application. Have fun and happy coding!
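Picking up the JSON idea from the wrap-up, a minimal persistence sketch might look like this (the file name and helper functions are illustrative assumptions, not from the original post):

import json

CHOICES_FILE = "choices.json"  # hypothetical location

def load_choices(default=("One", "Two", "Three")):
    # Return the saved choices, or the defaults on first run
    try:
        with open(CHOICES_FILE) as f:
            return json.load(f)
    except OSError:
        return list(default)

def save_choices(choices):
    # Persist the current choice list to disk
    with open(CHOICES_FILE, "w") as f:
        json.dump(choices, f)

You would then build self.combo_contents from load_choices() in __init__() and call save_choices(self.combo_contents) at the end of on_text_change().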
http://www.blog.pythonlibrary.org/2020/01/08/letting-users-change-a-wx-comboboxs-contents-in-wxpython/
CC-MAIN-2020-10
en
refinedweb
@Target(value={PARAMETER,ANNOTATION_TYPE})
@Retention(value=RUNTIME)
@Documented
public @interface AuthenticationPrincipal
Annotation that is used to resolve Authentication.getPrincipal() to a method argument.
public abstract boolean errorOnInvalidType
True if a ClassCastException should be thrown when the current Authentication.getPrincipal() is the incorrect type. Default is false.
public abstract String expression
A SpEL expression, evaluated against the principal, used to resolve the object to inject. For example, perhaps the user wants to resolve a CustomUser object that is final and is leveraging a UserDetailsService. This can be handled by returning an object that looks like:

public class CustomUserUserDetails extends User {
    // ...
    public CustomUser getCustomUser() {
        return customUser;
    }
}

Then the user can specify an annotation that looks like:
@AuthenticationPrincipal(expression = "customUser")
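For context, a typical injection point looks like the following controller sketch (this is not part of the javadoc; it assumes Spring MVC with the Spring Security argument resolver registered, and the CustomUser type from the example above):

import org.springframework.security.core.annotation.AuthenticationPrincipal;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class UserController {

    // Spring resolves the argument from Authentication.getPrincipal(),
    // then applies the "customUser" SpEL expression to it
    @GetMapping("/user/me")
    public String currentUser(@AuthenticationPrincipal(expression = "customUser") CustomUser customUser) {
        return customUser.toString();
    }
}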
https://docs.spring.io/spring-security/site/docs/4.2.x/apidocs/org/springframework/security/core/annotation/AuthenticationPrincipal.html
CC-MAIN-2020-10
en
refinedweb
Hi All,
Brand new PhpStorm user here :) I'm developing a few apps using the Laravel framework, and the completion is not working at all :( It's probably just me. I'm wondering if there's something special I need to do to make it read the Laravel files to be able to use them as a completion source. Eg. There are several classes that get used statically, so when I type:
Route::
I would expect the members/properties etc to show up, but it just says: Undefined class Route. Any help or feedback would be much appreciated. Cheers.

Hi Matt,
Could you please provide a link for that framework (download link) as well as some basic and very simple example project to look into?

Heya, Laravel is available from: Download link is top right of the front page @ There's a demo app from an earlier version (same problem though) available here: Cheers.

Hi Matt,
It's the class aliases in the config file that are the problem. There's a Laravel forum thread about it here: And more recently here: The 'gist' of which is that you need to provide PhpStorm with a file in the project that it can parse for class references, but you will never need to load because of the aliases.

Thanks for that, I combined everything in those posts and updated a little and it's nearly all working now :) The only thing that's not working correctly is the absence of suggestions and false flagging when using the classes like this here on line 54 (the title, url_title, etc, properties). Don't think there'd be a quick fix for that though.

So .. the framework itself is namespace-based .. but the actual code of the app does not use namespaces (have not dug into it much -- but somehow it finds correct classes during runtime). So .. instead of properly inserting a use statement where such class will be used (as required per namespaces specs), you just bypass it by declaring some fake classes ... Well -- whatever -- as long as it works and you are happy.
It is not "false flagging" -- you are expecting PhpStorm to discover your dynamic fields (created at runtime) and issue no warning. How should I know for sure if that is indeed a valid field and not some typo? Same for PhpStorm. There are multiple "fixes" for that:
1. Manually declare such variable as an instance of stdClass. Your particular example (line 53): The idea is -- unknown fields on variables of stdClass will not be marked as undefined (no warnings). No code suggestion (code completion) for such dynamic fields will be available.
2. Relax inspection a bit (still no code completion for such fields) -- Settings | Inspections | PHP | Undefined | Undefined field --> Downgrade severity if __magic methods are present in class. This will be applied to the whole project (unless you define custom scopes and configure this inspection differently for each scope). Such unknown fields will still be highlighted, but with lower severity/not-so-visible (not an error/warning, just a notice).
3. (The most proper/recommended/cross-IDE-safe way IMO) Define such fields in advance using @property syntax -- code completion will be available. Your particular example File: \application\models\news.php
Just in case also see:

I am unable to get this to work properly with PhpStorm 5.0.1. I've put the ide_helper.php file in my project root but it's not working.
RESOLVED: I used a different ide_helper.php file.

Can you share your directory structure and the content of the ide_helper.php?

I'm on pS 5 and it works fine, by the way I have mine named "_IDE_HELPER.php"

I created a folder called ide_helper in the project root and placed the ide_helper.php file in there. I then included the ide_helper folder in the PHP include path. Everything appears to be working as expected now. I have been messing with CodeIgniter for the past couple of days but found Laravel and wanted to start playing around with it. It seems that it has a much richer feature set.

Laravel is great, and you can always get great help from the community IRC #laravel

Yes! It seems great thus far!

If someone's still interested in a helper file - see this GitHub repository. It works just fine and can be integrated as an external library.
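For anyone rebuilding such a helper stub by hand: it is just a file of empty classes that extend the real implementations behind Laravel's aliases, so the IDE can resolve Route:: and friends without the file ever being loaded. A rough sketch (the Laravel 3 class names and config path below are assumptions, not the contents of the linked repository):

<?php
// ide_helper.php - parsed by PhpStorm only, never include()d at runtime
class Route extends \Laravel\Routing\Route {}
class URL extends \Laravel\URL {}
// ...one stub per alias defined in the framework's aliases config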
https://intellij-support.jetbrains.com/hc/en-us/community/posts/207058725-PhpStorm-and-Laravel
CC-MAIN-2020-10
en
refinedweb
Daily Data Science Puzzle

import numpy as np

# popular instagram accounts
# (millions followers)
inst = [232,  # "@instagram"
        133,  # "@selenagomez"
        59,   # "@victoriassecret"
        120,  # "@cristiano"
        111,  # "@beyonce"
        76]   # "@nike"
inst = np.array(inst)

superstars = inst > 100
print(superstars[0])
print(superstars[2])

What is the output of this puzzle?
Numpy is a popular Python library for data science focusing on linear algebra. The following handy numpy feature will prove useful throughout your career. You can use comparison operators directly on numpy arrays. The result is an equally-sized numpy array with boolean values. Each boolean indicates whether the comparison evaluates to True for the respective value in the original array.
The puzzle creates a list of integers. Each integer represents the number of followers of popular Instagram accounts (in millions). First, we convert this list to a numpy array. Then, we determine for each account whether it has more than 100 million followers. We print the first and the third boolean value of the resulting numpy array.
The result is True for @instagram with 232 million followers and False for @victoriassecret with 59 million followers.
Solution
True
False
https://blog.finxter.com/numpy-array-compare-operator/
CC-MAIN-2020-10
en
refinedweb
Problem in EJB3 using TimerService with @Singleton and @Management
Steam s - Mar 21, 2014 7:55 AM
Hi All, I am trying to understand old code written by a colleague who has left the company now. The problem is about calling a start() method, which should then initialize a TimerService and further invoke a method. The code was presumably working fine earlier on JBoss 5, but since we have now migrated to JBoss 6, it does not seem to work. The EJB is as follows:

package com.abc.potsdam.middleware.cache;

import java.util.StringTokenizer;
import javax.annotation.Resource;
import javax.ejb.EJB;
import javax.ejb.SessionContext;
import javax.ejb.Singleton;
import javax.ejb.Stateless;
import javax.ejb.Timeout;
import javax.ejb.Timer;
import javax.ejb.TimerService;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.jboss.annotation.ejb.Service;
import com.traveltainment.potsdam.common.configuration.ConfigService;

@Singleton
@Stateless
@TransactionAttribute(TransactionAttributeType.NEVER)
public class CacheLoaderBean implements CacheLoader, CacheLoaderManagement {

    private Log log = LogFactory.getLog(this.getClass());

    @EJB(mappedName = "/ConfigServiceBean/local")
    ConfigService configService;

    @EJB
    DroolsSession droolsSession;

    @Resource
    private SessionContext sessionContext;

    /**
     * after 10000 ms starting of loading persistence units
     *
     * @param timer
     */
    @Timeout
    public void startTimeout(Timer timer) {
        loadPersistenceUnitsToCache();
    }

    public void loadPersistenceUnitsToCache() {
        String persistenceUnits = configService.getProperty("AvailablePersistenceUnits");
        log.info("reloading persistence units: " + persistenceUnits);
        StringTokenizer stringTokenizer = new StringTokenizer(persistenceUnits, ",");
        while (stringTokenizer.hasMoreTokens()) {
            String persistenceUnitName = stringTokenizer.nextToken();
            droolsSession.initSession(persistenceUnitName);
        }
    }

    public void start() {
        // create timer for 10000 ms to start loading cache
        TimerService timerService = sessionContext.getTimerService();
        Timer timer = timerService.createTimer(10000, "cache loading timer");
    }
}

The imported interfaces are:

package com.abc.potsdam.middleware.cache;

import org.jboss.ejb3.annotation.Management;

@Management
public interface CacheLoaderManagement {
    public void loadPersistenceUnitsToCache();
    public void start();
}

package com.abc.potsdam.middleware.cache;

import javax.ejb.Local;
import org.jboss.annotation.ejb.Management;

@Local
public interface CacheLoader {
    public void loadPersistenceUnitsToCache();
}

The problem is that in my implementation bean, I don't see the start() method ever getting executed. If it is to be invoked via JMX-Console, I am also a bit unaware as to how I can do that. Since the start method never gets executed, the rest of the functionality is also never invoked. I have done a bit of research on the topic, and found in some threads that @Singleton and @Management may have some issues together when using JBoss 6. The compilation is all fine; it is only that the code never gets executed and I don't see the log messages appearing. Could someone please give me some pointers as to how I can set a timer for my scenario here? Thanks.

1. Re: Problem in EJB3 using TimerService with @Singleton and @Management
Wolf-Dieter Fink - Mar 21, 2014 8:27 AM (in response to Steam s)
What are you trying to achieve?
And what version of JBoss do you use, did you migrate to JBossAS6? This is an old, outdated version and it doesn't make sense - you should migrate to WildFly 8.0.0.Final.
From the code, @Stateless and @Singleton together seem wrong; the bean type can only be one of them.

2. Re: Problem in EJB3 using TimerService with @Singleton and @Management
Steam s - Mar 21, 2014 8:55 AM (in response to Wolf-Dieter Fink)
Thanks for replying, Wolf. My main objective is to execute loadPersistenceUnitsToCache() every 10 seconds automatically. I have now removed the annotation @Stateless, but still no effect. The start method is not executed, and I am not sure if it should be executed through the JMX console or if it gets executed automatically by the application server at deployment time. Yes, the entire application is migrated from JBoss 5.1 to jboss-6.1.0.Final. Other parts of the application are working fine, but not this one. Thanks.

3. Re: Problem in EJB3 using TimerService with @Singleton and @Management
Steam s - Mar 21, 2014 12:57 PM (in response to Steam s)
I have now created a very simple stateless bean, which is utilizing the Java EE 6 scheduler annotation:

[CODE]
package com.abc.potsdam.middleware.cache;

import javax.ejb.Schedule;
import javax.ejb.ScheduleExpression;
import javax.ejb.Stateless;
import javax.ejb.Timeout;
import javax.ejb.Timer;
import javax.ejb.TimerConfig;
import javax.ejb.TimerService;
import javax.naming.Context;
import javax.naming.InitialContext;
import java.util.*;

@Stateless
public class AutoTimerBean {

    /**
     * This method will be invoked every day at 5:00:00 in the morning
     * @param timer The timer instance
     */
    //@Schedule (hour = "5", persistent=false)
    @Schedule(second="*/10", minute="*", hour="*", dayOfWeek="*", dayOfMonth="*", month="*", year="*", info="MyTimer")
    private void executeEveryTenSeconds(Timer timer) {
        // do some task here.
        System.out.println("Auto-timer method invoked at " + new Date() + " for bean " + this.getClass().getSimpleName());
    }
}
[/CODE]

But strangely, I don't get to see the timer message appearing every ten seconds. In fact it does not appear at all. I am using Java EE 6 and JBoss 6.1-Final. If I look inside the 'JMX MBean View', I can see the class as part of a deployed .war. Could someone please suggest why the timer is not getting executed and if I need to configure anything inside the JMX console? Thanks.

4. Re: Problem in EJB3 using TimerService with @Singleton and @Management
Steam s - Mar 24, 2014 5:51 AM (in response to Steam s)
any help, people?
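The thread never reaches a resolution. For reference, a Java EE 6 way to reproduce the old start()-after-deployment behaviour without @Service/@Management is an eagerly initialized singleton - a sketch of the apparent intent, not a tested fix for this particular deployment:

import javax.annotation.PostConstruct;
import javax.annotation.Resource;
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.ejb.Timeout;
import javax.ejb.Timer;
import javax.ejb.TimerConfig;
import javax.ejb.TimerService;

@Singleton
@Startup
public class CacheLoaderBootstrap {

    @Resource
    private TimerService timerService;

    @PostConstruct
    void scheduleInitialLoad() {
        // fires once, 10 seconds after deployment, like the old start()
        timerService.createSingleActionTimer(10000,
                new TimerConfig("cache loading timer", false));
    }

    @Timeout
    void onTimeout(Timer timer) {
        // delegate to loadPersistenceUnitsToCache() here
    }
}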
https://developer.jboss.org/thread/238303
CC-MAIN-2020-10
en
refinedweb
At 11:37 13/04/2005, Jozsef Kovacs wrote:
Many thanks Jozsef,
This is an excellent suggestion and we accept your motivation and commitment. Public discussion is extremely important and very valuable. We will respond positively, hopefully rapidly, and enthusiastically to questions raised here. We expect that there will be contributions from a range of people.
The process of creating CML rests on an unwritten process (rather like the British Constitution, which is never written). There are certain fundamentals, rather similar to the IETF's "general consensus and running code". The essentials include:
- CML must be conforming XML. (I have seen things calling themselves "CML" which did not even parse in generic XML tools).
- where possible CML uses emerging W3C technology rather than inventing its own. Thus we use DOM, SAX, RDF, RSS, XSLT, XSD, etc.
- CML interoperates with other XML languages through XML namespaces
- the definition of CML is taken from the publications in peer-reviewed literature. This means that the latest formal specification is JCICS 2003. We all intend to conform to that
- CML is an Open process to the extent that it is published, we receive contributions and acknowledge them, we promote and applaud interoperability. Where possible everything, including discussions like this, is openly visible and much should be made available through an Open redistribution license such as Creative Commons and the Budapest/Berlin/Bethesda declarations of Open Access. However it is not like Open source as you are not allowed to modify the definition of the specification. CML welcomes non-Open conformant implementations - and does not regard them as morally inferior. However CML cannot use closed source for its conformance testing.
In the design of CML there are certain principles.
- explicit semantics are preferable to implicit semantics, even at the cost of some verbosity
- a feature should have been exposed to the community before being incorporated in the publication
- new features are resisted until it is impossible to refuse them. We avoid bloat - therefore there is usually an experimental specification before the next formalisation.
- features are not removed - this would be difficult for existing applications - but they may be obsoleted.
- elements and attributes should - as far as possible - be context-independent. This means subsets of the schema can be used. For example a theochem application might only use molecule and atoms (no bonds, or anything else).
- in all CML applications the default semantics of elements and attributes must be identical. Thus formalCharge represents an integral number of electrons removed from or added to an atom. However the convention attribute allows additional semantics to be added. For example there is little communal systematisation of bond orders and types. CML uses "1"/"2"/"3" (or "S"/"D"/"T") for "normal single/double/triple" bonds. Other values are allowed but should have a convention. Thus "4" could mean aromatic for convention="MDL" and quadruple for others.
- CML applications may ignore foreign namespaces. For example a cml:molecule could contain an SVG element, or an SVG document could contain a cml:molecule
- prefixes (e.g. cml:) are NOT hardcoded. They must be accompanied by a namespace declaration.
- additional elements and attributes in the CML namespace are NOT allowed. It would be easy for files to collide if this were allowed.
In the development of CML software there are also certain principles.
- CML itself is not software.
The equivalent of a bug is an inconsistency, and of a feature is an unhappy piece of design which is ugly or difficult to use.
- software should strive to be conformant. We intend to produce conformance tools in the near future.
- it should be easy to develop simple applications of CML. A CML processor may ignore elements and attributes if the interpretation does not depend on them. For example some current CML software does not interpret reaction or spectrum.
- in principle all XML input should be passed to the output if required. However this requires significant DOM programming and the W3C DOM is not user-friendly. Therefore some CML processing may lose information. Ideally a roundtrip of readCML->writeCML->readCML should be lossless, but this is difficult to achieve.
- All CML software should interpret information in the same way (unless it ignores it). It should not invent local semantics. Thus if 100 molecules are concatenated in a CML document the semantics are just that - 100 concatenated molecules. They are not necessarily snapshots on a dynamics trajectory, different experimental observations, etc. We are developing RDF as the method to annotate complex compound documents.
- No conformant CML file should cause CML-aware software to crash, and error messages should be as informative as possible - e.g. "FOOBAR does not support the CML reaction element | the CML array syntax | the CML map/link vocabulary, etc. and this document will not be processed", "PLINGE has detected multiple CML molecule elements and displayed each in a separate panel. CML spectrum elements are ignored". Then users know slightly better
- CML documents range over a very wide variety - molecules, comp chem output, instruction manuals, synthetic recipes, journal articles, etc. Multiple namespaces will be common. There is no default "best" way to display or process these and there is unlikely to be a "CML browser" that does everything. However there are likely to be generic tools which manage compound documents and which can accept CML plugins to display chemistry in foreign contexts.
In general the CML in JCICS2003 has stood the test and there are very few immediate needs to change the vocabulary in major ways. About 2-3 (unintentionally) implicit semantics have been formalised by a new attribute. The creation of "CMLReact" has involved 2-3 additional elements and these will be submitted for publication shortly. CMLComp is being informed by many marked up outputs and the main need is for the semantics of basisSet to be enlarged. The solid state will be explored over the next year in a funded project.
Most of the work involves consolidating and firming up the semantics on current vocabulary. This is hard, because chemistry is very sloppy over its information, but we are making progress. The primary mechanism is the JUMBO toolkit which is element/attribute centred. The semantics of every element is explored and most have been done. For each we create a range of unit tests - currently over 400. This will be amplified by conformance tests. We also now need to create communal dictionaries for the common uses of dictRef. Some of these have been collected for reactions, but they are also required for common CML concepts. Here again anyone can create their own namespaced dictionaries; if there is communal agreement, terms in these may be raised to the communal CML area.
Some areas of CML are more explored than others.
Thus we have intensively explored reactions and are reasonably confident that the specification is robust. We are exploring spectra but have some way to go. We have much experience with comp chem calculations on geometry optimisation and properties, but little on dynamics and ensembles. Recently we have made major advances in crystallography.
CML is a meritocracy and participants are honoured by their contribution - see Eric Raymond's "Homesteading the Noosphere". Our methods are Open and we aim for interoperability. Very recently we have decided to define Web-service and related APIs to build large network applications - see for a summary of some of this. We intend to summarise these, probably on the QSAR list. We cannot include non-Open software under "Blue Obelisk" but we can - if resources allow - highlight non-Open software that interoperates via CML.
Thank you very much for catalysing this discussion. All members of this list are equally welcome and all contributions are taken in a positive spirit. Henry and I have moderated 30,000 emails in XML-DEV without a flamewar or spam. You may wish to ask questions, make suggestions, recount your experiences, etc. You may make product announcements *to the extent that they inform the list community about CML and steer clear of hype and vapourware*. For example it would be very useful to know that FOOBAR's parser could read 10,000 CML files per minute, or that they had a CML-compliant format for publishing logP, or that they had an Openly accessible dictionary of properties, but not that they could calculate 10,000 logP per minute or a secret algorithm for clustering molecules.
Please also note that JUMBO is Open source and interoperates with other Open (Blue Obelisk) groups (e.g. CDK/JOELib/QSAR/JChemPaint/Jmol/QSAR/Octet/Openbabel). Messages are sometimes crossposted there, but should generally be consistent with the core philosophy of those lists.
There are many things I have not written and this may be a good time to start introducing CML from scratch to some list members.
P.
PS. My part of this mail is re-usable under Creative Commons. Other parts might be re-usable under "fair use" (please note I am no longer at Nottingham, but at Cambridge).
Peter Murray-Rust
Unilever Centre for Molecular Informatics, Chemistry Department, Cambridge University
Lensfield Road, CAMBRIDGE, CB2 1EW, UK Tel: +44-1223-763069 Fax: +44 1223 763076
--
Jozsef Kovacs
Software developer
ChemAxon Ltd.
Maramaros koz 3/A, Budapest, 1037 Hungary
mailto:jozsef.kovacs@...

We spent an hour yesterday discussing how to manage different types of inputs and the modularisation of the code to support this. Whatever we come up with has to support GUIs, commandline, workflows, WebServices and programmatic calls. It also has to support files, URLs, inputStreams, etc. I suggest we do this through InputSource and alternatively through File (if we wish to preserve the filename). I would value comments on this, including from those who aren't directly involved in programming JUMBO/Java as it may affect the XML result (the inclusion of the CMLName element).
(AbstractBase is the seriously unfortunate name chosen for a CML Object - I would like to change this to CMLBase; we can't call it CMLObject or CMLElement as they are part of the language.)
Note that we are coming close to the need for a definitive dictionary of CML dictRef names such as cml:filename.
My proposed API is:

public final static String CML_FILENAME = "cml:filename";

/** reads files in a directory and transforms to AbstractBase.
 * each file must be XML and the documentElement taken from CML Schema.
 * files must conform to a regex
 * the filename (as java.io.File.getCanonicalPath())
 * can be saved as a CMLName child of the documentElement with
 * dictRef=CML_FILENAME and the content=filename
 * files that cannot be read are skipped without comment
 *
 * @param dir directory containing the files
 * @param regex (as in java.lang.String.matches(regex)) filters the files
 * @param saveFilename add filename as CMLName child of documentElement
 * @return an array of the top level AbstractBase elements
 * @throws IOException cannot find directory
 */
public static AbstractBase[] readCMLObjectsFromDirectory(File dir, String regex, boolean saveFilename) throws IOException;

/** reads file and transforms to AbstractBase.
 * the file must be XML and the documentElement taken from CML Schema.
 * the filename (as java.io.File.getCanonicalPath())
 * can be saved as a CMLName child of the documentElement with
 * dictRef=CML_FILENAME and the content=filename
 *
 * @param file the file
 * @param saveFilename add filename as CMLName child of documentElement
 * @return an AbstractBase element or null if read fails
 * @throws IOException cannot find file
 * @throws CMLException cannot interpret file as CML
 */
public static AbstractBase readCMLObject(File file, boolean saveFilename) throws IOException;

/** reads InputSource and transforms to AbstractBase.
 * each file must be XML and the documentElement taken from CML Schema.
 *
 * @param source the InputSource
 * @return an AbstractBase element or null if read fails
 * @throws CMLException cannot interpret source as CML
 */
public static AbstractBase readCMLObject(InputSource inputSource) throws IOException, CMLException;

Peter Murray-Rust
Unilever Centre for Molecular Informatics, Chemistry Department, Cambridge University
Lensfield Road, CAMBRIDGE, CB2 1EW, UK Tel: +44-1223-763069 Fax: +44 1223 763076

[Crossposted to 3 lists, please be considerate]
[John Irwin]
>> ... Can you recommend software for preparing and manipulating CML files? If OE offered CML, we could and might offer CML tomorrow.

There are many good tools for converting files to CML. First, some words about strategy. CML is powerful enough to hold compound documents such as compound data cards, computational chemistry output and (when combined with XHTML) complete scientific documents. So "converting to CML" can involve components such as molecules, reactions, their properties, spectra, eigenproperties, etc. In general CML can hold any information composed of simple datatypes (numbers, strings, arrays, matrices, etc.) and predefined schema elements (reactions, spectra...). We are devising a mechanism for building complex datatypes (e.g. critical point, phase diagrams).
Most people currently want to manage molecular data and I'll stick with that. JohnI and I have already corresponded usefully so I believe that a Zinc entry consists of at least:
* a molecule
* its provenance
* published names
* published properties
* calculated properties
* intellectual property rights
CML can manage all of this except the IPR. To summarise John's mail, Zinc consists of molecular information supplied by compound suppliers under contract, for which properties are calculated using software made available under contract and then collected in a database which itself has restrictions on use (e.g. only limited subsets can be distributed, and for restricted use). CML is not capable of managing the complexity of this IPR so the converter would have to add this, preferably in RDF. [Note that this problem does not occur for Open data since we can simply add a BOAI or Creative Commons license.]
The provenance (without rights) is managed by the DublinCore dc:creator and dc:publisher in CML:

<metadataList>
  <metadata name="dc:creator" content="Foobarchem"/>
  <metadata name="dc:publisher" content="ji@..."/>
</metadataList>

CML can, in principle, hold everything else without loss. Since I don't know the range of properties I don't know which are complex, but assuming that most are scalar, then the simple approach is to render them as:

<property dictRef="zinc:mpt">
  <scalar units="units:celsius" min="121" max="123.5" errorBasis="range"/>
</property>

===
OK, most people weren't expecting that! BUT provenance and redistribution is increasingly important. That is why the default action of OpenBabel when outputting CML is to add metadata. We would hope that if users add metadata to the input (only possible in CML) it would be transported through
===
I suspect the question could be rephrased as "how do I convert a file containing small-molecule information and produce a CML file which contains the atoms, bonds and their properties without loss? Each molecule is separately identifiable and there is no contextual linkage between them (e.g. they aren't poses, supramolecules, etc.) The file(s) may contain many independent molecules and batch conversion is required"
I currently know of the following tools, and would approach them in this order:
* Openbabel. This has the widest range of file types and can deal with lists of molecules. Billy Tyrrell, Chris Morley, Geoff Hutchison and I have variously developed this and Henry Rzepa has carried out roundtripping. We intend to maintain this as a flagship for CML conversion - i.e. if there is a problem we will try to respond.
* JUMBO. We have concentrated on complex formats and currently offer:
* MDL Molfile, SDF (and RXN). This attempts to follow the published spec for V2000 files. However since some of the spec appears to be specific to MDL programs it is necessarily a subset, albeit a fairly comprehensive one.
* MOL2 format taken from the Tripos spec. This again is a subset and does not address recognition of atom type and fragments. Not validated.
* CDX and CDXML. Most of the spec relating to molecules and reactions, but not graphic layout, has been implemented. Since CD is a very graphically oriented format it is extremely easy to create objects which do not formally represent the semantics of the molecule. Conversion of any CDX file is likely to be lossy and fuzzy.
* CIF. This is a complete interpretation of DDL1 with manual coding of some of the core dictionary. Although CIF can contain chemical structure information this is virtually never used. Hence we have to use heuristics to calculate the chemistry and this is almost lossless for GOOD CIFs (as published by Acta Cryst.)
* SMILES. I think this is fairly complete and should include stereochemistry.
* CDK. This has a range of file readers and a CML writer. We haven't been directly involved in the coding but correspond daily with the group. If there are any problems then I am sure the CDK group would be keen to address them and we'll help in the discussions.
* JOELib. This has a wide range of functionality, including the calculation of properties.
Again we are in frequent touch, and although I haven't used it for CML I am sure the authors are responsive.
* BlueObelisk, WebServices and Taverna. () This is a recent movement among a number of OpenSource and Open Data groups to ensure interoperability. "File conversions" will increasingly be packaged as WebServices () or workflows (such as). Scientists can then select the services they require and compose their own application. This will include conversion, validation, checks for uniqueness, submission to repository, etc. I suspect that Zinc actually requires a Taverna-like workflow for its maintenance. Taverna can be used to wrap closed source programs, but of course these cannot be distributed. We offer WebServices for OpenBabel and JUMBO as above so anyone can link their conversion requirements. Also our WS are Open so anyone can clone them to avoid connection problems. We do not currently offer WebServices that use closed source programs because there are usually license restrictions by the suppliers and WS cannot yet deal with complex IPR negotiations. There is no reason why we might not create some in the future - if so the WebService wrapper would probably be OpenSource.
There are some other Open Source programs (with whose authors we have had discussions) which read and/or emit CML including:
* BKChem
* Ghemical
I don't know whether these can be used in batch but as they are open source then anyone can add this. I am also sure they'd be keen to help. I don't know the degree of conformance.
There are an increasing number of computational chemistry programs which emit (and often read) CML but this is out of scope in this thread.
We welcome implementations and use of CML by for-profit organisations. CML itself is an openly published, read-only specification and does not require implementations to be OpenSource. It does, however, require best efforts to conform and we shall write more of this later. Although, in principle, it is possible to write conformant software by reading the spec, in practice no spec is completely watertight and we encourage discussion. Obviously any posting to this list advertises the origin of the poster, so companies may wish to mail privately and will get a private reply. However we have limited resources and cannot generally give extended free private advice.
There are some closed source tools which read/emit CML. Some of their authors have not approached us at all. Others have approached us but expected us to provide complete CML implementations at our own expense. Since, at present, this is not an attractive business proposition, we haven't been able to accept these offers. We note that some of them (unidentified) have since added "CML". We do not know the degree of conformance or comprehensiveness. Note that some of them are only available through purchase and we may not have access to them. We do know that some of them do not conform to the published CML specification and shall be advising them that this is inconsistent with the use of the term and mark "CML". Other list readers might like to comment, but please make sure that statements are factually correct and avoid political discussions.
* ACDLabs. No public information on conformance.
* CambridgeSoft. No public information on conformance.
* Chemaxon (Marvin). We have had no contact from them. This company lists the CML elements they support and adds many others in the same namespace which are not CML. The "CML" is therefore not conformant to the published Schema. There are also semantics which are incompatible with CML (e.g. the order of atoms may be important). This is "semantic pollution". We shall write to them soon, advising them that this is unacceptable.
There are also semantics which are incompatible with CML (e.g. the order of atoms may be important). This is "semantic pollution". We shall write to them soon, advising them that this is unacceptable. There are technical fixes to some of this such as the use of proprietary namespaces for attributes, elements and datatypes. * Foo. private communication. * Bar. private communication * Xyzzy. private communication. I shall write separately on compchem and semantics. P. Peter Murray-Rust Unilever Centre for Molecular Informatics Chemistry Department, Cambridge University Lensfield Road, CAMBRIDGE, CB2 1EW, UK Tel: +44-1223-763069 Fax: +44 1223 763076 Dear Prof. Irwin, Your message was forwarded to the CML-discuss mailing list. While=20 OpenEye itself doesn't offer CML support, Open Babel most certainly=20 does. (Open Babel grew out of the old OELib, though there are many=20 differences between OB and OELib/OEChem these days.) Babel supports a=20 wide variety of chemical file formats, with more formats added every=20 release. And of course, if you find something lacking in OpenBabel or discover a=20= bug -- unlike OpenEye, Babel has an open bug tracking system: Cheers, -Geoff -- -Dr. Geoffrey Hutchison <grh25@...> Cornell University, Department of Chemistry and Chemical Biology Abru=F1a Group On Apr 6, 2005, at 10:28 AM, Eugen Leitl wrote: > ----- Forwarded message from John Irwin <jji@...> ----- > > From: "John Irwin" <jji@...> > Date: Wed, 6 Apr 2005 07:22:48 -0700 > To: <e.willighagen@...>, <zinc-fans@...> > Cc: > Subject: RE: [Zinc-fans] database formats? > X-Mailer: Microsoft Office Outlook, Build 11.0.6353 > > Hi Egon > > ZINC depends extensively on current tools in the field for preparing=20= > and > manipulating chemical structure files. OpenEye and Cactvs, for=20 > example, do > not support CML as far as I know. I'm also not aware of any docking=20 > program > that reads CML files. (Please let me know if you know of one). Since=20= > ZINC > was primarily designed to serve the virtual screening community, it=20 > should > offer files in format that are used in that field. > > All that said, I recognize that ZINC can be useful beyond the=20 > computational > ligand discovery community, and so I take your suggestion of CML=20 > seriously. > It may well be the next format we offer. Can you recommend software = for > preparing and manipulating CML files? If OE offered CML, we could and=20= > might > offer CML tomorrow. > >=20 > interested > in new methods of macromolecular crystallography please see the Erice > Crystallography page. Coming soon: > 2006: Structure and Function of Large Molecular Assemblies > 2007: Engineering of Crystalline Materials Properties > 2008: =46rom Molecules to Medicine - Integrating Crystallography (Drug=20= > Design) > > >> -----Original Message----- >> From: zinc-fans-bounces@... >> [mailto:zinc-fans-bounces@...] On Behalf Of Egon Willighagen >> Sent: Wednesday, April 06, 2005 12:46 AM >> To: zinc-fans@... >> Subject: Re: [Zinc-fans] database formats? >> >> On Wednesday 06 April 2005 12:52 am, John Irwin wrote: >>> Correct. In ZINC SDF is currently generated from authoritative mol2 >>> and not the other way around. Current plans include augmenting SDF >>> files with numerous tagged data - not sure when that will appear. >> >> Have you thought about Chemical Markup Language (CML)? >> >> Egon >> _______________________________________________ >> Zinc-fans mailing list >> Zinc-fans@... 
>> >> > > _______________________________________________ > > ----- Forwarded message from John Irwin <jji@...> ----- From: "John Irwin" <jji@...> Date: Wed, 6 Apr 2005 07:22:48 -0700 To: <e.willighagen@...>, <zinc-fans@...> Cc:=20 Subject: RE: [Zinc-fans] database formats? X-Mailer: Microsoft Office Outlook, Build 11.0.6353 Hi Egon ZINC depends extensively on current tools in the field for preparing and manipulating chemical structure files. OpenEye and Cactvs, for example, do not support CML as far as I know. I'm also not aware of any docking program that reads CML files. (Please let me know if you know of one). Since ZINC was primarily designed to serve the virtual screening community, it should offer files in format that are used in that field. All that said, I recognize that ZINC can be useful beyond the computational ligand discovery community, and so I take your suggestion of CML seriously. It may well be the next format we offer. Can you recommend software for preparing and manipulating CML files? If OE offered CML, we could and might offer CML tomorrow.=20 interested in new methods of macromolecular crystallography please see the Erice Crystallography page. Coming soon: 2006: Structure and Function of Large Molecular Assemblies 2007: Engineering of Crystalline Materials Properties 2008: From Molecules to Medicine - Integrating Crystallography (Drug Design) =20 > -----Original Message----- > From: zinc-fans-bounces@...=20 > [mailto:zinc-fans-bounces@...] On Behalf Of Egon Willighagen > Sent: Wednesday, April 06, 2005 12:46 AM > To: zinc-fans@... > Subject: Re: [Zinc-fans] database formats? >=20 > On Wednesday 06 April 2005 12:52 am, John Irwin wrote: > > Correct. In ZINC SDF is currently generated from authoritative mol2=20 > > and not the other way around. Current plans include augmenting SDF=20 > > files with numerous tagged data - not sure when that will appear. >=20 > Have you thought about Chemical Markup Language (CML)? >=20 > Egon > _______________________________________________ > Zinc-fans mailing list > Zinc-fans@... > >=20 _______________________________________________ At 15:32 01/04/2005, Egon Willighagen wrote: >Hi all, > >another question: how are radical representated in CML? In light have radical >reactions? I've seen the <electron> element, but this does not quite seem to >cover it... The simplest is to use the spinMultiplicity attribute on atom or molecule. This should be able to distinguish between a molecular ion (e.g. C6H6+., C6H6-., and a phenyl radical C6H5.). If there are many electrons with different symmetry properties (e.g. in transition metal ions) then <electron> will have to be used, but there aren't any semantics yet. Chris Morley of Openbabel fame uses radicals a lot and I think these are sufficient. P. >Egon > > >------------------------------------------------------- >This SF.net email is sponsored by Demarc: >A global provider of Threat Management Solutions. >Download our HomeAdmin security software for free today! > >_______________________________________________ >cml-discuss mailing list >cml-discuss@... > Peter Murray-Rust Unilever Centre for Molecular Informatics Chemistry Department, Cambridge University Lensfield Road, CAMBRIDGE, CB2 1EW, UK Tel: +44-1223-763069 Fax: +44 1223 763076 Billy has done a great job in investigating XOM as a an alternative DOM and has fitted this to CIFDOM. This is a sufficiently major change that it shouldn't be in Jumbo4.6 which is primarily a maintenance branch. 
(My fault - I should have flagged this up earlier.) Our intention is that continuing extensions should be in odd-numbered branches, the next one being the (unpopulated) Jumbo4.7. Gemma and I are working on CMLReact and will shortly be committing to Jumbo4.7. Can we add the CIFDOM to that as well? And presumably we can unwind the CVS to before XOM/CIFDOM.
The longer term strategy is to move to Java 1.5 and this will be reflected in JUMBO5. At present the only work on JUMBO5 is me and until it does something useful it will probably stay within UCC. The main thrusts are:
* support for Java5 constructs, especially templating and collection management
* reducing the weight of CMLDOM (fewer methods, and maybe based on XOM)
* unit tests (I have ca 450 unit tests so far, almost all for Tools). The tests can hopefully be retrofitted to Jumbo4.* (probably 4.7)
If anyone has useful thoughts on moving from W3CDOM, please let us know.
P.
Peter Murray-Rust
Unilever Centre for Molecular Informatics, Chemistry Department, Cambridge University
Lensfield Road, CAMBRIDGE, CB2 1EW, UK Tel: +44-1223-763069 Fax: +44 1223 763076

Hi all,
another question: how are radicals represented in CML? In light have radical reactions? I've seen the <electron> element, but this does not quite seem to cover it...
Egon
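Following PMR's answer to Egon above, a minimal sketch of the spinMultiplicity approach (illustrative CML written for this note, not taken from the thread; a phenyl radical with one unpaired electron is a doublet, hence spinMultiplicity="2"):

<molecule id="phenyl" spinMultiplicity="2" formalCharge="0">
  <atomArray>
    <atom id="a1" elementType="C"/>
    <!-- ... the remaining five C and five H atoms ... -->
  </atomArray>
</molecule>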
https://sourceforge.net/p/cml/mailman/cml-discuss/?viewmonth=200504
CC-MAIN-2017-51
en
refinedweb
Neil Girdhar added the comment:
Thanks for taking the time to explain, but it's still not working for me:

from types import new_class

class MyMetaclass(type):
    pass

class OtherMetaclass(type):
    pass

def metaclass_callable(name, bases, namespace):
    return new_class(name, bases, dict(metaclass=OtherMetaclass))

class MyClass(metaclass=MyMetaclass):
    pass

try:
    class MyDerived(MyClass, metaclass=metaclass_callable):
        pass
except:
    print("Gotcha!")

try:
    MyDerived = new_class("MyDerived", (MyClass,), dict(metaclass=metaclass_callable))
except:
    print("Gotcha again!")

So my questions are:
1. Why shouldn't Lib/types:new_class behave in exactly the same way as declaring a class using "class…" notation?
2. What's the point of checking if the metaclass is an instance of type?
It seems to me that in Python 2, non-type metaclasses did not have to be the "most derived class" (that's what the documentation seems to suggest with the second rule). However, we no longer accept that in CPython 3 — neither in the Lib/types, nor in a regular declaration. In fact, the exception is:
"metaclass conflict: "
"the metaclass of a derived class "
"must be a (non-strict) subclass "
"of the metaclasses of all its bases");
So why not just check that unconditionally in Lib/types.py?
https://www.mail-archive.com/python-bugs-list@python.org/msg287920.html
CC-MAIN-2017-51
en
refinedweb
pcap_set_tstamp_precision.3pcap man page

pcap_set_tstamp_precision — set the time stamp precision returned in captures

Synopsis
#include <pcap/pcap.h>
int pcap_set_tstamp_precision(pcap_t *p, int tstamp_precision);

Description
pcap_set_tstamp_precision() sets the precision of the time stamp desired for packets captured on the pcap descriptor to the type specified by tstamp_precision. It must be called on a pcap descriptor created by pcap_create() that has not yet been activated by pcap_activate(). Two time stamp precisions are supported, microseconds and nanoseconds. One can use the options PCAP_TSTAMP_PRECISION_MICRO and PCAP_TSTAMP_PRECISION_NANO to request the desired precision. By default, time stamps are in microseconds.

Return Value
pcap_set_tstamp_precision() returns 0 on success if the specified time stamp precision is expected to be supported by the operating system, PCAP_ERROR_TSTAMP_PRECISION_NOTSUP if the operating system does not support the requested time stamp precision, and PCAP_ERROR_ACTIVATED if called on a capture handle that has been activated.

See Also
pcap(3PCAP), pcap_get_tstamp_precision(3PCAP), pcap-tstamp(7)
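A minimal usage sketch (not part of the man page; the device name "en0" and the error handling are illustrative):

#include <pcap/pcap.h>
#include <stdio.h>

int main(void) {
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *p = pcap_create("en0", errbuf);   /* placeholder device */
    if (p == NULL) {
        fprintf(stderr, "pcap_create: %s\n", errbuf);
        return 1;
    }
    /* must be called before pcap_activate() */
    if (pcap_set_tstamp_precision(p, PCAP_TSTAMP_PRECISION_NANO) != 0)
        fprintf(stderr, "nanosecond time stamps not supported\n");
    if (pcap_activate(p) < 0) {
        pcap_perror(p, "pcap_activate");
        pcap_close(p);
        return 1;
    }
    /* ... capture packets here ... */
    pcap_close(p);
    return 0;
}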
https://www.mankier.com/3/pcap_set_tstamp_precision.3pcap
CC-MAIN-2017-51
en
refinedweb
Hi, I'm making a simple calculator program and I've encountered errors that I've never heard of before. No matter what I do, I don't know how to fix this problem:
error C2106: '=' : left operand must be l-value
What does that mean? I don't know what I did wrong. I am using Visual Studio 2008. I haven't done programming in a while so I'm kinda rusty... Please help! thank you

#include <string>
#include <iostream>
#include <iomanip>
using namespace std;

void main ( )
{
    int choice;
    double a, b, c;
    char an;
    cout << "Please enter the operation you want" << endl;
    cout << "Press 1 for addition" << endl
         << "Press 2 for subtraction" << endl
         << "Press 3 for division" << endl
         << "Press 4 for multiplication" << endl;
    cin >> choice;
    cout << "Please enter the numbers you want to do calculation with" << endl;
    while (an = 'y')
    {
        if (choice == 1)
        {
            a + b = c;
        }
        if (choice == 2)
        {
            a - b = c;
        }
        if (choice == 3)
        {
            a / b = c;
        }
        if (choice == 4)
        {
            a * b = c;
        }
        cout << "Here is your result: " << c << endl;
        cout << "want to do it again?" << endl;
        cin >> an;
    }
}

I see two problems there: Your assignments are backwards (= assigns from right to left, *not* the other way around), and your while loop condition is an assignment when you probably want it to be a comparison. Well, three problems if you count the fact that you didn't use code tags.

An "lvalue" is an expression which corresponds to a memory location: you can put a value there, so it can be a value on the left of an = assignment. An "rvalue" can be any arbitrary expression of the correct type; there is no need for it to correspond to a single memory location, because it's going to be assigned to an lvalue since it's on the right of an = assignment.
Last edited by Lindley; December 2nd, 2008 at 11:31 PM.

An lvalue is basically something that can be assigned to; an rvalue is basically a temporary. In your code, you have lines like...
a+b = c;
a+b makes a temporary object that holds the result of a+b; as it's a temporary, it's an rvalue and thus cannot be assigned to.
c = a+b;
would work fine. c is a named variable that's not const; it can be assigned to, hence it's an lvalue. So now we take the same temporary you were making before and then assign that result to c.
lvalue and rvalue were terms that originated for describing something that can be on the left hand side of the assignment operator and something that can only be on the right hand side of an assignment operator.

oh okay. thank you. it works very well now. haha I forgot for a minute that it goes right to left
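Putting the replies together, a corrected loop might look like this (a sketch, not posted in the thread; note the original code also never reads a and b, so the sketch adds that as well):

// assumes an was initialized to 'y' before the loop
while (an == 'y')                 // comparison, not assignment
{
    cin >> a >> b;                // the original never read the operands
    if (choice == 1) c = a + b;   // lvalue goes on the left
    if (choice == 2) c = a - b;
    if (choice == 3) c = a / b;
    if (choice == 4) c = a * b;
    cout << "Here is your result: " << c << endl;
    cout << "want to do it again?" << endl;
    cin >> an;
}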
http://forums.codeguru.com/showthread.php?466488-Making-a-simple-calculator-error-C2106-left-operand-must-be-l-value
CC-MAIN-2017-51
en
refinedweb
[ ]
Yongjun Zhang updated HDFS-7497:
--------------------------------
    Labels: supportability  (was: )

> Inconsistent report of decommissioning DataNodes between dfsadmin and NameNode webui
> ------------------------------------------------------------------------------------
>
>                 Key: HDFS-7497
>                 URL:
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode, namenode
>            Reporter: Yongjun Zhang
>            Assignee: Yongjun Zhang
>              Labels: supportability
>             Fix For: 2.7.0
>
>         Attachments: HDFS-7497.001.patch
>
> It's observed that the dfsadmin report lists DNs in the decommissioning state while the NN webui lists the same DNs as dead.
> I found that what happens is:
> The NN webui uses two steps to get the result:
> * first collect a list of all alive DNs,
> * traverse through all live DNs to find decommissioning DNs.
> It calls the following method to decide whether a DN is dead or alive:
> {code}
> /** Is the datanode dead? */
> boolean isDatanodeDead(DatanodeDescriptor node) {
>   return (node.getLastUpdate() <
>       (Time.now() - heartbeatExpireInterval));
> }
> {code}
> On the other hand, dfsadmin traverses all DNs to find all decommissioning DNs (checking whether a DN is in the {{AdminStates.DECOMMISSION_INPROGRESS}} state), without checking whether the DN is dead or alive as above.
> The problem is, when a DN is determined to be dead, its state may still be {{AdminStates.DECOMMISSION_INPROGRESS}}.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
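The fix the report implies - applying the same liveness test on the dfsadmin path - would look roughly like this fragment (a sketch for illustration, not the actual HDFS-7497 patch):

// when assembling the dfsadmin decommissioning report:
for (DatanodeDescriptor node : datanodes) {
    if (node.isDecommissionInProgress() && !isDatanodeDead(node)) {
        decommissioning.add(node);  // only live, decommissioning nodes
    }
}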
http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-issues/201412.mbox/%3CJIRA.12760354.1418102447000.7269.1418362873947@Atlassian.JIRA%3E
CC-MAIN-2017-51
en
refinedweb
Originally posted by Chinmay Kant:
Can anybody explain why the output of the following code is 1, 4? I am not getting the logic:

public class Main {
    public static void main(String args[]) {
        int x = 0;
        for (int i = 0; i < 2; i++) {
            x += (x += ++x);
            System.out.println(x);
        }
    }
}

Thanks in Advance
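The thread stops there, so for completeness: Java evaluates a compound assignment by saving the current value of the left-hand variable before evaluating the right-hand side (JLS 15.26.2), and that is what produces 1 and 4. A step-by-step trace, added here rather than quoted from the thread:

// iteration 1: x == 0
// outer x += (...): saves old x = 0
//   inner x += ++x: saves old x = 0; ++x makes x = 1 and yields 1
//   inner assigns x = 0 + 1 = 1 and the inner expression yields 1
// outer assigns x = 0 + 1 = 1   -> prints 1

// iteration 2: x == 1
// outer saves old x = 1
//   inner saves old x = 1; ++x makes x = 2 and yields 2
//   inner assigns x = 1 + 2 = 3 and yields 3
// outer assigns x = 1 + 3 = 4   -> prints 4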
http://www.coderanch.com/t/267467/java-programmer-SCJP/certification/Operators
CC-MAIN-2015-40
en
refinedweb
Injecting Scripts
You can inject .js files into webpages (a .js file is a text file with the .js extension, containing JavaScript functions and commands). The scripts in these files have access to the DOM of the webpages they are injected into. Injected scripts have the same access privileges as scripts executed from the webpage's host.

About Injected Scripts
Injected scripts are loaded each time an extension-accessible webpage is loaded, so you should keep them lightweight. If your script requires large blocks of code or data, you should move them to the global HTML page. For details, see Example: Calling a Function from an Injected Script.
An injected script is injected into every webpage whose URL meets the access limitations for your extension. For details, see Access and Permissions. Scripts are injected into the top-level page and any children with HTML sources, such as iframes. Do not assume that there is only one instance of your script per browser tab. If you want your injected script not to execute inside of iframes, preface your high-level functions with a test, such as this (see the sketch at the end of this section):
Injected scripts have an implied namespace—you don't have to worry about your variable or function names conflicting with those of the website author, nor can a website author call functions in your extension. In other words, injected scripts and scripts included in the webpage run in isolated worlds, with no access to each other's functions or data.
Injected scripts do not have access to the safari.application object. Nor can you call functions defined in an extension bar or global HTML page directly from an injected script. If your script needs to access the Safari app or operate on the extension—to insert a tab or add a contextual menu item, for example—you can send a message to the global HTML page or an extension bar. For details, see Messages and Proxies.
To add content to a webpage, use DOM insertion functions, as illustrated in Listing 10-1.
Listing 10-1 Modifying a webpage using DOM insertion

Adding a Script
To add an injected script, follow these steps:
1. Create an extension folder—open Extension Builder, click +, choose New Extension, give it a name and location.
2. Drag your script file into the extension folder.
3. Click New Script under Injected Extension Content in Extension Builder, as illustrated in Figure 10-1.
Figure 10-1 Specifying injected content
You can choose to inject your script as a Start Script or an End Script. An End Script executes when the DOM is fully loaded—at the time the onload attribute of a body element would normally fire. Most scripts should be injected as End Scripts. A Start Script executes when the document has been created but before the webpage has been parsed. If your script blocks unwanted content it should be a Start Script, so it executes before the page displays.
4. Choose your script file from the pop-up menu.
You can have both Start and End Scripts. You can have more than one script of each type. In order for your scripts to be injected, you must specify either Some or All website access for your extension. You can have your script apply to a single webpage, all webpages, or only certain webpages—pages from certain domains, for example. For details, see the description of whitelists and blacklists in Access and Permissions.
Copyright © 2015 Apple Inc. All Rights Reserved. Updated: 2015-09-16
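The two code snippets referenced above (the iframe test after "such as this" and Listing 10-1) did not survive extraction; the following are representative sketches, not Apple's original listings:

// iframe guard: run high-level logic only in the top-level page
if (window.top === window) {
    // extension logic goes here
}

// in the spirit of Listing 10-1: modifying a webpage using DOM insertion
var note = document.createElement("p");
note.textContent = "Added by an injected script";  // placeholder content
document.body.appendChild(note);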
https://developer.apple.com/library/safari/documentation/Tools/Conceptual/SafariExtensionGuide/InjectingScripts/InjectingScripts.html
CC-MAIN-2015-40
en
refinedweb
The JPA Overview's Chapter 12, Mapping Metadata and the JDO Overview's Section 15.7, "Joins" explain join mapping in each specification. All of the examples in those documents, however, use "standard" joins, in that there is one foreign key column for each primary key column in the target table. Kodo also supports several non-standard join patterns; if you map them as described below, Kodo will function properly.
In a partial primary key join, the foreign key covers only a subset of the target table's primary key columns. There is no special syntax for expressing a partial primary key join - just do not include column definitions for missing foreign key columns.
In a non-primary key join, at least one of the target columns is not a primary key. Once again, Kodo supports this join type with the same syntax as a primary key join. There is one restriction, however: each non-primary key column you are joining to must be controlled by a field mapping that implements the kodo.jdbc.meta.Joinable interface. All built-in basic mappings implement this interface, including basic fields of embedded objects. Kodo will also respect any custom mappings that implement this interface. See Section 7.10, "Custom Mappings" for an examination of custom mappings.
Not all joins consist of only links between columns. In some cases you might have a schema in which one of the join criteria is that a column in the source or target table must have some constant value; Kodo calls this a constant join.
To form a constant join in JDO mapping, first set the column element's name attribute to the name of the column. If the column with the constant value is the target of the join, give its fully qualified name in the form <table name>.<column name>. Next, set the target attribute to the constant value; string constants are single-quoted. (The JPA syntax is parallel: the constant goes in referencedColumnName, as the examples show.)

JPA:

@Entity
@Table(name="T1")
public class ... {
    @ManyToOne
    @JoinColumns({
        @JoinColumn(name="FK", referencedColumnName="PK1"),
        @JoinColumn(name="T2.PK2", referencedColumnName="'a'")
    })
    private ...;
}

JDO:

<class name="..." table="T1">
  <...>
    <column name="FK" target="PK1"/>
    <column name="T2.PK2" target="'a'"/>
  </...>
</class>

Numeric constants work the same way, but are not quoted:

JPA:

@Entity
@Table(name="T1")
public class ... {
    @ManyToOne
    @JoinColumns({
        @JoinColumn(name="FK", referencedColumnName="PK2"),
        @JoinColumn(name="T2.PK1", referencedColumnName="2")
    })
    private ...;
}

JDO:

<class name="..." table="T1">
  <...>
    <column name="FK" target="PK2"/>
    <column name="T2.PK1" target="2"/>
  </...>
</class>

Finally, from the inverse direction, these joins would look like this:

JPA:

    ...;
}

JDO:

<class name="..." table="T2">
  <...>
    <column name="T1.FK" target="PK1"/>
    <column name="PK2" target="'a'"/>
  </...>
  <...>
    <column name="T1.FK" target="PK2"/>
    <column name="PK1" target="2"/>
  </...>
</class>
http://docs.oracle.com/cd/E21764_01/apirefs.1111/e13946/ref_guide_mapping_notes_nonstdjoins.html
CC-MAIN-2015-40
en
refinedweb
Newlines in Sources.bz2 indices

Bug Description

Hi. This looks similar to bug #435316, but I'm not sure it's the same. In http:// [...]

{{{
Package: openoffice.org
Binary: openoffice.org, broffice.org, openoffice.
 libuno-

Version: 1:3.1.1-
[...]
}}}

there's a superfluous newline after libuno-

[ This is breaking apt_pkg's parsing of this file which breaks germinate for the Ubuntu Moblin Remix image so it's quite high on my radar right now. ;-) ]

Related branches:
- Colin Watson: Approve on 2009-10-15
- Gavin Panella (community): Approve on 2009-10-07
- Diff: 537 lines, 9 files modified: lib/canonical/launchpad/emailtemplates/upload-accepted.txt (+1/-0), lib/canonical/launchpad/emailtemplates/upload-announcement.txt (+1/-0), lib/canonical/launchpad/emailtemplates/upload-new.txt (+1/-0), lib/canonical/launchpad/emailtemplates/upload-rejection.txt (+1/-0), lib/lp/archiveuploader/changesfile.py (+15/-1), lib/lp/archiveuploader/tagfiles.py (+57/-22), lib/lp/archiveuploader/tests/data/test436182_0.1_source.changes (+23/-0), lib/lp/archiveuploader/tests/test_tagfiles.py (+101/-37), lib/lp/soyuz/model/queue.py (+15/-7)

It seems we have accepted some DSC with this format mistake:

{{{
launchpad_prod=> SELECT spn.name as name, spr.version as version, o.name as owner, a.name as archive from sourcepackagere %';
      name      |      version      |     owner      | archive
----------------+-------------------+----------------+---------
 openoffice.org | 1:3.1.1-2ubuntu1  | ubuntu-drivers | primary
 linux          | 2.6.31-
 mono           | 2.4.2.3+dfsg-2    | ubuntu-drivers | primary
 linux          | 2.6.31-
 openoffice.org | 1:3.1.1-
 linux          | 2.6.31-10.35~awe1 | awe            | ppa
 linux          | 2.6.31-10.36~eee1 | yofel          | ppa
 linux          | 2.6.31-
 linux          | 2.6.31-10.36~eee2 | yofel          | ppa
 linux          | 2.6.31-10.36~eee3 | yofel          | ppa
 linux          | 2.6.31-
 openoffice.org | 1:3.1.1-
(12 rows)
}}}

This is somehow related with bug #435316; we have to find the right way to identify this format problem (I'm assuming it is a problem) and reject those uploads. Since there are not *many* yet, the migration of the existing ones will be easy.

From the Debian (and Ubuntu) Policy Manual:

5.6.19. `Binary'
----------------

This field is a list of binary packages. Its syntax and meaning varies depending on the control file in which it appears. When it appears in the `.dsc' file, it lists binary packages which a source package can produce, separated by commas[1]. It may span multiple lines.

This is preventing OEM Services from running germinate for some Intel work we're doing. Very important this gets fixed ASAP.

Just trying to reproduce this locally - with a pristine sources.txt in the same directory in a python console:

{{{
f = open('sources.txt')
import apt_pkg
p = apt_pkg.ParseTagFile(f)
while p.Step() == 1:
    ver = p.Section["Version"]
    src = p.Section["Package"]
f.close()
}}}

Add *two* consecutive new-lines within the Binary: section as you like and save, then:

{{{
f = open('sources.txt')
p = apt_pkg.ParseTagFile(f)
while p.Step() == 1:
    src = p.Section["Package"]
    ver = p.Section["Version"]
}}}

The two consecutive new-lines result in:

Traceback (most recent call last):
  File "<console>", line 3, in ?
KeyError: 'Version'

But using just one new-line (as described in the bug) seems to work fine. This is with apt_pkg.Version == '0.7.20.2ubuntu6'. I'll try to find out which version of apt-pkg is being used and verify the problem.

OK, exactly the same (two new-lines required) to replicate this on dogfood, which has:

$ dpkg-query -W apt python-apt
apt        0.7.9ubuntu17.
python-apt 0.7.4ubuntu7.5

which, after checking with cjwatson, is the same as the production publisher.

So it seems when the original dsc contains a single \n, we're publishing a Sources.bz2 with a *blank* line, which is incorrect and apt_pkg correctly fails to parse it. OK, now for the fix...

The actual issue is in lp.archiveuploader. I've created and tested an intermediate work-around that rstrips the dsc_binaries when creating the index file, which will give us a bit more time to solve the original issue properly.

I'm setting the status back to Triaged, as what we've committed is a work-around - not a fix. It's a work-around that we can cherry-pick if necessary, and it will ensure that there are no trailing '\n' at end of fields in the Sources index file, so that the file can be correctly parsed by apt_pkg. The work-around does not ensure that the stray trailing '\n' will not get into the SPR.dsc_binaries field in the first place, nor does it fix the corrupt data. Hence leaving this bug open.

I've not verified it yet, but we're 90% certain that when there are valid single '\n' chars within a field's values (as in the snippet in the bug description above), the lp.archiveuploader tag-file parser mishandles them. I can't see any reason why we are not simply using apt_pkg.

As another example, currently the Sources in the following ppa (identified by Celso's query above) displays the problem also: http://

{{{
Package: linux
Binary: linux-source- linux-image-
}}}

Downloading http://ppa.launchpad.net/moblin/ppa/ubuntu/dists/karmic/main/source/Sources.bz2 file ...
Decompressing http://ppa.launchpad.net/moblin/ppa/ubuntu/dists/karmic/main/source/Sources.bz2 file ...
Traceback (most recent call last):
  File "/usr/bin/germinate-update-metapackage", line 273, in <module>
    germinator, [dist], components, architecture, cleanup=True)
  File "/usr/lib/germinate/Germinate/Archive/tagfile.py", line 121, in feed
    "source/Sources"))
  File "/usr/lib/germinate/Germinate/germinator.py", line 323, in parseSources
    ver = p.Section["Version"]
KeyError: 'Version'
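The rstrip work-around described in the comments above would amount to something like this when the index file is written; a sketch of mine, not the actual Launchpad patch (the spr/dsc_binaries names come from the thread, the helper itself is hypothetical):

{{{
def dsc_binaries_for_index(spr):
    # Strip stray trailing newlines from the stored Binary value so the
    # published Sources stanza never ends a field with a blank line.
    return spr.dsc_binaries.rstrip('\n')
}}}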
https://bugs.launchpad.net/launchpad/+bug/436182
CC-MAIN-2021-21
en
refinedweb
Setup Guide

Android Project Setup

Android Studio

You can either start off with a blank project; the necessary steps are provided below. Alternatively you can use the SampleProject that is already bundled with the SDK, where the necessary configuration steps have already been made.

- Create a new Android Application Project.
- Copy the file libs/wikitudesdk.aar into the libs folder of your module (<project-root>/<module-name>/libs).
- Open build.gradle from your module, add the wikitudesdk.aar as a dependency and tell gradle to search the libs folder, like in the code below.

```groovy
android {
    ...
}

dependencies {
    compile fileTree(dir: 'libs', include: ['*.jar'])
    compile (name: 'wikitudesdk', ext:'aar')
    compile 'com.android.support:appcompat-v7:21.0.3'
}

repositories {
    flatDir{
        dirs 'libs'
    }
}
```

- If you already purchased a license, please set the applicationId to the package name you provided us with.

```groovy
defaultConfig {
    applicationId "xxxx"
}
```

- Add the following permissions to your AndroidManifest.xml (the attribute values were truncated in this copy):

```xml
<uses-feature android:
<uses-feature android:
<uses-feature android:
<uses-feature android:
<uses-sdk android:
```

- The activity holding the AR-View (called architectView in the following) must have android:configChanges="screenSize|orientation" set in the AndroidManifest.xml, for example this could look like:

```xml
<activity android:
```

- Enter a valid trial license key. Read the chapter on how to obtain a free trial key.

AR View in Activity

Keep in mind that the Wikitude SDK is not a native Android SDK as you know from other SDKs. The basic concept is to add an architectView to your project and notify it about lifecycle events. The architectView creates a camera surface and handles sensor events. The experience itself, sometimes referred to as an ARchitect World, is implemented in JavaScript and packaged in your application's asset-folder (as in this project) or on your own server. The experiences are written in HTML and JavaScript and call methods in Wikitude's AR-namespace (e.g. AR.GeoObject). It is recommended to handle your augmented reality experience in a separate Activity.

Declare the architectView inside a layout XML, e.g. add this within FrameLayout's parent tags:

```xml
<com.wikitude.architect.ArchitectView android:
```

ArchitectView creates a camera surface, so ensure you properly release the camera in case you're using it somewhere else in your application. Besides a camera (front or back-facing) the ArchitectView also makes use of compass and accelerometer values; for a full list of requirements refer to the list of supported devices. ArchitectView.isDeviceSupported(Context context) checks whether the current device has all required hard- and software in place or not.

It is very important to notify the ArchitectView about life-cycle events of the Activity. Call architectView's onCreate(), onPostCreate(), onPause(), onResume(), onDestroy() inside your Activity's lifecycle methods. Best practice is to define a member variable for the architectView in your Activity. Set it right after setContentView in Activity's onCreate(), and then access architectView via the member variable later on.

```java
this.architectView = (ArchitectView)this.findViewById( R.id.architectView );
final ArchitectStartupConfiguration config = new ArchitectStartupConfiguration();
config.setLicenseKey( /* license key */ );
this.architectView.onCreate( config );
```

Activity's onPostCreate() is the best place to load the AR experience.
```java
this.architectView.onPostCreate();
this.architectView.load( "YOUR-AR-URL" );
```

The architectView.load() argument is the path to the html file that defines your AR experience. It can be relative to the asset folder root or a web URL (starting with http:// or https://). E.g. architectView.load('arexperience.html') opens the html in your project's assets-folder, whereas passing a full web URL loads the file from a server. You can also pass arguments to the experience, e.g. architectView.load('arexperience.html?myarg=1'). The lifecycle forwarding this all relies on is sketched below.
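Putting the lifecycle advice together, the Activity wiring might look like this. The ArchitectView methods are the ones named in the guide; the surrounding Activity boilerplate and layout names are my assumption:

```java
public class ArExperienceActivity extends Activity {

    private ArchitectView architectView;

    @Override
    protected void onCreate( Bundle savedInstanceState ) {
        super.onCreate( savedInstanceState );
        setContentView( R.layout.activity_ar ); // layout containing the ArchitectView
        this.architectView = (ArchitectView)this.findViewById( R.id.architectView );
        final ArchitectStartupConfiguration config = new ArchitectStartupConfiguration();
        config.setLicenseKey( /* license key */ "" );
        this.architectView.onCreate( config );
    }

    @Override
    protected void onPostCreate( Bundle savedInstanceState ) {
        super.onPostCreate( savedInstanceState );
        this.architectView.onPostCreate();
        this.architectView.load( "arexperience.html" );
    }

    @Override
    protected void onResume() {
        super.onResume();
        this.architectView.onResume();
    }

    @Override
    protected void onPause() {
        super.onPause();
        this.architectView.onPause();
    }

    @Override
    protected void onDestroy() {
        super.onDestroy();
        this.architectView.onDestroy();
    }
}
```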
https://www.wikitude.com/external/doc/documentation/7.0/android/setupguideandroid.html
CC-MAIN-2021-21
en
refinedweb
This is part of a series I started in March 2008 - you may want to go back and look at older parts if you're new to this series.

(I will be at the Brighton Ruby Conference on Monday July 20th - if you're attending and follow my series, or just want to talk about Ruby in general, I'm happy to have a chat.)

First order of business today is to get rid of this abomination from last time:

    # FIXME: ScannerString is broken -- need to handle String.new with an argument
    # - this again depends on handling default arguments
    s = ScannerString.new
    r = ch.__get_raw
    s.__set_raw(r)

Essentially we want to be able to do both String.new and String.new "foo", and then inherit String with ScannerString, and use super to pass a String argument straight through to the superclass constructor, so we can change the code above to:

    s = ScannerString.new(ch)

This shouldn't be too hard: we pass the argument count to methods, and

    def foo arg1, arg2 = "something"
    end

is equivalent to (pseudo code):

    def foo args
      arg1 = args[0]
      if <number of args> == 2
        arg2 = args[1]
      else
        if <number of args> == 1
          arg2 = "something"
        else
          [here we really should raise an exception]
        end
      end
    end

The overheads Ruby imposes for all the nice high level stuff should start to become apparent now... Of course, the more arguments, the more convoluted the above will get, but it's not hard - we have %s(numargs) to get at the argument count, and we should be able to do this pretty much just with high level constructs. My first attempt was something like this:

    def foo arg1, arg2 = "default"
      %s(if (gt numargs 4)
           (printf "ERROR: wrong number of arguments (%d for 1)\n" (sub numargs 2))
           (if (lt numargs 3)
             (printf "ERROR: wrong number of arguments (%d for 1)\n" (sub numargs 2))
             (if (le numargs 3)
               (assign arg2 (__get_string "default")))))
      puts arg1
      puts arg2
    end

Spot the mistake? It is slightly tricky. The problem is not in this function itself, but in how the arguments get allocated: since the caller allocates space on the stack for the arguments, if arg2 is not being passed, then there's no slot for it, so we can't assign anything to it.

One way out of this is to allocate a local variable, and copy either the default value or the passed in argument to the local variable, and then rewrite all references to arg2 to local_arg2 in the rest of the function. Another variation is, instead of doing the rewrite, to change the Function#get_arg method to allow us to "redirect" requests for the argument afterwards. We can do this similarly to how we reference numargs, by treating them as negative offsets. But we're getting ahead of ourselves.

First let's update the parser. In 7bca9f3 I've added this to parse_arglist. Previously we just ignored the result of the call to the shunting yard parser. Now we keep it, and treat the argument differently if we have a default value:

    default = nil
    if expect("=")
      nolfws
      default = @shunting.parse([","])
    end
    if prefix then args = [[name.to_sym, prefix == "*" ?
      :rest : :block]]
    elsif default
      args = [[name.to_sym, :default, default]]
    else
      args = [name.to_sym]
    end

This change breaks a number of test cases, so there's also a series of updates to features/parser.feature.

To make this work, we also need to slightly change transform.rb to actually handle these arrays in the argument list (which brings up the question of whether or not it might not be cleaner to modify the parser to store them uniformly, but let's leave that for later):

    def rewrite_let_env(exp)
      exp.depth_first(:defm) do |e|
    -    args = Set[*e[2]]
    +    args = Set[*e[2].collect{|a| a.kind_of?(Array) ? a[0] : a}]

Note that I'm not particularly happy with the approach I'm about to describe. It feels a bit dirty. Not so much the concept, but the implementation. I'd love suggestions for cleanups. But first, there's a test case in f25101b.

Next, we turn our attention to function.rb. Function and Arg objects are instantiated from the data in the parse tree, and we now need to handle the arguments with :default in them. We start by simply adding a @default in Arg (in 1a09e19). You'll note I've also added a @lvar that is not being set immediately, as this value will depend on the function object. But as the comment says, this will be used to let us "switch" this Arg object from referring to the argument itself to referring to a local variable that will effectively (and intentionally) alias the original in the case of defaults.

    class Arg
    -  attr_reader :name, :rest
    +  attr_reader :name, :rest, :default
    +
    +  # The local variable offset for this argument, if it
    +  # has a default
    +  attr_accessor :lvar

      def initialize(name, *modifiers)
        @name = name
        # @rest indicates if we have
        # a variable amount of parameters
        @rest = modifiers.include?(:rest)
    +    @default = modifiers[0] == :default ? modifiers[1] : nil

In Function, we add a @defaultvars that keeps track of how many extra local variable slots we need to allocate. In Function#initialize we change the initialization of the Arg objects, and once we've created each one, we assign a local variable slot for this argument:

    +  @defaultvars = 0
      @args = args.collect do |a|
    -    arg = Arg.new(*[a].flatten)
    +    arg = Arg.new(*[a].flatten(1))
    +    if arg.default
    +      arg.lvar = @defaultvars
    +      @defaultvars += 1
    +    end

We then initialize @defaults_assigned to false. This is a new instance variable that will act as a "switch". When it is false, compiling a request for an argument with a default will refer to the argument itself. So we're not yet aliasing. At this stage, we need to be careful - if we read the argument without checking numargs, we may be reading random junk from the stack. Once we switch it to true, we're telling the Function object that we wish to redirect requests for arguments that have defaults to their freshly created local variables. We'll see how this is done when changing Compiler shortly.

Let's now look at get_arg and some new methods, in reverse order of the source, Function#get_arg first:

    def get_arg(a)
      # Previously, we made this a constant for non-variadic
      # functions. The problem is that we need to be able
      # to trigger exceptions for calls with the wrong number
      # of arguments, so we need this always.
      return [:lvar, -1] if a == :numargs

The above is basically a bug fix. It was a short-sighted optimization (I knew there was a reason why I don't like premature optimizations, but every now and again they are just too tempting) which worked great until actually getting the semantics right.

And this is the start of the part I'm not particularly happy with:

      r = get_lvar_arg(a) if @defaults_assigned || a[0] == ?#
      return r if r
      raise "Expected lvar - #{a} / #{args.inspect}" if a[0] == ?#

      args.each_with_index do |arg,i|
        return [arg.type, i] if arg.name == a
      end
      return @scope.get_arg(a)
    end

Basically, if the @defaults_assigned "switch" has been flipped, or we explicitly request a "fake argument", we call get_lvar_arg to try to look it up. If we find it, great. If not, we check for a normal variable. We feel free to throw an exception for debugging purposes if we're requesting a "fake argument", as they should only ever be created by the compiler itself, and should always exist if they've been created. The "fake argument" here is "#argname" if the argument's name is "argname". I chose to use the "#" prefix as it can't legally occur in an argument name, and so it's safe. It's still ugly, though.

Next a helper to process the argument list and yield the ones with defaults, and "flip the switch" when done:

    def process_defaults
      self.args.each_with_index do |arg,index|
        # FIXME: Should check that there are no "gaps" without defaults?
        if (arg.default)
          yield(arg,index)
        end
      end
      @defaults_assigned = true
    end

Last but not least, we do the lookup of our "fake argument":

    # For arguments with defaults only, return the [:lvar, arg.lvar] value
    def get_lvar_arg(a)
      a = a.to_s[1..-1].to_sym if a[0] == ?#
      args.each_with_index do |arg,i|
        if arg.default && (arg.name == a)
          raise "Expected to have a lvar assigned for #{arg.name}" if !arg.lvar
          return [:lvar, arg.lvar]
        end
      end
      nil
    end

In Compiler#output_functions (still in 1a09e19), we add a bit to actually handle the default argument:

    - @e.func(name, func.rest?, pos, varfreq) do
    + @e.func(name, pos, varfreq) do
    +
    +   if func.defaultvars > 0
    +     @e.with_stack(func.defaultvars) do
    +       func.process_defaults do |arg, index|
    +         @e.comment("Default argument for #{arg.name.to_s} at position #{2 + index}")
    +         @e.comment(arg.default.inspect)
    +         compile_if(fscope, [:lt, :numargs, 1 + index],
    +           [:assign, ("#"+arg.name.to_s).to_sym, arg.default],
    +           [:assign, ("#"+arg.name.to_s).to_sym, arg.name])
    +       end
    +     end
    +   end
    +
    +   # FIXME: Also check *minium* and *maximum* number of arguments too.
    +

This piece basically allocates a new stack frame for local variables, and then iterates through the arguments with defaults; we then compile code equivalent to if numargs < [offset]; #arg = [default value]; else #arg = arg; end, where "arg" is the name of the current argument.

What should have been utterly trivial, given the above, turned into a couple of annoying debugging sessions, due to bugs that were in fact exposed once I started checking the argument numbers... But let us start with the actual checks:

    + minargs = func.minargs
    +
    + compile_if(fscope, [:lt, :numargs, minargs],
    +   [:sexp,[:call, :printf,
    +     ["ArgumentError: In %s - expected a minimum of %d arguments, got %d\n",
    +      name, minargs - 2, [:sub, :numargs,2]]], [:div,1,0] ])
    +
    + if !func.rest?
    +   maxargs = func.maxargs
    +   compile_if(fscope, [:gt, :numargs, maxargs],
    +     [:sexp,[:call, :printf,
    +       ["ArgumentError: In %s - expected a maximum of %d arguments, got %d\n",
    +        name, maxargs - 2, [:sub, :numargs,2]]], [:div,1,0] ])
    + end

These should be reasonably straightforward:

- If fewer arguments than the minimum were passed (we'll look at Function#minargs shortly), we printf an error.
- If more arguments than the maximum were passed, we printf another error - unless there's a "splat" (*arg) in the argument list, in which case the number of arguments is unbounded.
- On failure, after the printf, the [:div, 1, 0] forces a crash via a division by zero. I'd rather have used int 1 or int 3, which are intended for debugging, but this is temporary until implementing proper exceptions, and I've not surfaced access to the int instruction to the mid-layer of the compiler, so this is a lazy temporary alternative.

In Function (function.rb), this is #minargs and #maxargs:

    +  def minargs
    +    @args.length - (rest? ? 1 : 0) - @defaultvars
    +  end
    +
    +  def maxargs
    +    rest? ? 99999 : @args.length
    +  end

(We could leave #maxargs undefined for cases where #rest? returns true, but really, we could just as well try to set a reasonable maximum - if it gets ridiculously high, it likely indicates a messed up stack again.)

One thing that I wasted quite a bit of time with when adding the min/max argument checks was that this triggered a few lingering bugs/limitations of the current register handling. The main culprit was the [:div, 1, 0] part. As it happens, because this forces use of %edx (if you remember, this is because on i386, idivl explicitly depends on %edx), which is a caller saved register, our lack of proper handling of caller saved registers caused spurious errors. The problem is that Emitter#with_register is only safe as long as nothing triggers forced evictions of registers within the code it uses. This code is going to need further refactoring to make it safer. For now I've added a few changes to make the current solution more resilient (at the cost of sometimes making it generate even less efficient code, but it should long since have been clear that efficiency is secondary until everything is working). The most important parts of this change are in Emitter and RegAlloc (in 3997a31).

Firstly, we add an Emitter#caller_save, like this:

    def caller_save
      to_push = @allocator.evict_caller_saved(:will_push)
      to_push.each do |r|
        self.pushl(r)
        @allocator.free!(r)
      end
      yield
      to_push.reverse.each do |r|
        self.popl(r)
        @allocator.alloc!(r)
      end
    end

This depends on an improvement of #evict_caller_saved that returns the registers that need to be saved on the stack, because they've been allocated "outside" of the variable caching (for cached variables, we can "just" spill them back into their own storage locations, and reload them later, if they are referenced again). We then push them onto the stack, and explicitly free them, yield to let the upper layers do what they want (such as generate a method call), and pop them back off the stack. There's likely lots to be gained from cleaning up the entire method call handling once the dust settles and things are a bit closer to semantically complete - we'll get back to that eventually, I promise.

The new version of evict_caller_saved looks like this:

    def evict_caller_saved(will_push = false)
      to_push = will_push ? [] : nil
      @caller_saved.each do |r|
        evict_by_cache(@by_reg[r])
        yield r if block_given?
        if @allocated_registers.member?(r)
          if !will_push
            raise "Allocated via with_register when crossing call boundary: #{r.inspect}"
          else
            to_push << r
          end
        end
      end
      return to_push
    end

Basically, we iterate through the list of caller-save registers, and if we've indicated we intend to push the values on the stack, we return a list of what needs to be pushed. Otherwise we raise an error, as we risk losing/overwriting an allocated register.

You will see Emitter#caller_save in use in compiler.rb in #compile_call and #compile_callm, where it simply wraps the code that evaluates the argument list and the calls themselves. Other than that, I've made some minor hacky changes to make #compile_assign, #compile_index and #compile_bindex more resilient against these problems.

The last, missing little piece is that it surfaced a bug in the handling of numargs with splats. In particular, once I added the min/max checks, the call to a class's #initialize triggered the errors, because this is done with ob.initialize(*args) in lib/core/class.rb, and it didn't set numargs correctly. I'm not going into this change in detail, as it's a few lines of modifications to splat.rb, compiler.rb and emitter.rb. The significant part of the change is that the splat handling now expects to overwrite %ebx, as it should, and Emitter#with_stack will now set %ebx outright unless it is called after the splat handling, in which case it will add the number of "fixed" arguments to the existing "splat argument count" in %ebx. (We'll still need to go back and fix the splat handling, as we're still not creating a proper array, and the current support only works for method calls.)

While working on this, it struck me that a future optimization to consider is different entry-points for different variations of the method. While the caller does not really "know" exactly which method will be called, the caller does know how many arguments it has, and so knows that either there is an implementation of a method that can take that number of arguments, or it will trigger an ArgumentError (or should do, when we add exceptions). Thus, if we allocate vtable slots based on the number of provided arguments too, then static/compile-time lookups will fail only for method calls where no possible combination of method name / number of arguments is known at compile time, and those will fall back to the slow path (which we've not yet implemented). The question of course is how much it would bloat the vtables. However this approach can also reduce the cost of the code, as we can jump past the numargs checks for the version where all arguments are specified etc. Another approach is to decide on rules for "padding" the argument space allocated, so that the most common cases of, say, 1-2 default arguments can be supported without resorting to extra local variables. This saves some assignments.

Next time, we'll fix super. Well, add it, as currently it ends up being treated as a regular method call.
CC-MAIN-2021-21
en
refinedweb
Or more specifically, how to do this in an efficient way in a statically typed language, such as C++. Languages such as Smalltalk and ECMAscript / Javascript supports this easily, but apart from syntactic ease of composition, their approaches does not do much better than what you can reasonably easily implement in C++. One article always worth mentioning is Protocol Extension: A Technique for Structuring Large Extensible Software-Systems by Dr. Michael Franz, who outlines an experimental approach he deviced for Oberon. Warning: Long rant following... Lots of implementation details glossed over :-) A basic approach to type composition, or protocol extension in C++ is something like the code below, which mirrors an approach used by some component systems: #include <map> #include <string> #include <iostream> // ---- Generic stuff -------------- class InterfaceBase { public: virtual ~InterfaceBase() { } }; class Object { typedef std::map<std::string,InterfaceBase *> IfaceMap; IfaceMap ifaces_; protected: template<typename T> void addImpl(T * ob) { ifaces_.insert(std::make_pair(typeid(T).name(),ob)); } public: template<typename T> operator T *() { IfaceMap::const_iterator i = ifaces_.find(typeid(T).name()); if (i == ifaces_.end()) return 0; return dynamic_cast<T *>(i->second); } }; // --------- Interface declaration class FooInterface : public InterfaceBase { public: virtual void foo() = 0; }; // ---------- Specific interface implementations class FooImpl : public FooInterface { public: void foo() { std::cout << "foo" << std::endl; } }; class FooObject : public Object { public: FooObject() { addImpl<FooInterface>(new FooImpl()); } }; int main() { FooObject ob; FooInterface * foo = ob; foo->foo(); } The downside of this should be fairly obvious: It is painfully wasteful for small "objects". It's not really type composition at all, merely some syntactic tricks to make the fact that we're dealing with a container of disparate objects. A way to reduce the wastage for the above approach might be to make sure we only allocate one block of memory only. It's not particularly difficult - we'll need a factor which will need to know which interface implementations to use, and can determine the binary layout of the object, and then create a mapping table, much like the vtable used for virtual methods. The factory can then allocate the memory and use placement new to create the objects. The obvious issue with this method is that if you use malloc to create this object, it can't be deleted with the normal delete operator, and if you use the array version of new you have to remember to use the array version of delete. One workaround is to always wrap them in a smart pointer that knows how to deal with them, or use a garbage collector - otherwise you have to explicitly call a function that knows how to deal with the composed objects. A better approach is this: template<int Size> class Object { protected: char data[Size] buffer; // ... the rest... }; So instead of inheriting from Object the class, new types will inherit from Object<SomeCalculatedSize> the template instantiation. The next refinement here is probably to use typelists with a lot of template magic in order to simplify the creation of the composed types. But it's still inefficient: It contains a list of pointers to interface implementations in each object. And it's also cumbersome to use: You can cast TO an interface, but you can't cast FROM an interface to the base object. 
The first problem is "easy" to take care of by implementing our own "vtable". Instead of containing just a flat buffer, each Object contains a pointer to an array. Each type implementation can initialize each new object with a pointer to a type specific vtable that gives the offsets from the start of the object to the specific interface implementation. The second is more problematic. One way of doing this is to put a pointer back to the base of the object at a fixed offset from each interface implementation, but that means wasting an additional pointer for each implemented interface. Another approach is to use a "fat pointer" - a smart pointer that contains extra state, for instance a pointer to the base object. This saves us extra pointers in the object itself, but double the size of our "pointers", which could be really painful in some applications. A third approach is to use a specialized smart pointer for each interface that will "cast" an internal pointer to the object to the specific interface implementation pointer on each call. This saves memory at the cost of at best an extra table lookup for each call. This is more or less the approach chosen for the Protocol Extension paper referenced above. However in that case the table contained all methods, not interface implementation types. A table lookup is only feasible if the number of interfaces used is reasonably small and you can accept allocating a table large enough for all interfaces in the system for each object, though. If not, you'll incure the cost of a hash table lookup instead. None of these options are particularly attractive. One possibility is supporting more than one of them: Allow casting to a "cheap" smart pointer but not back, and provide a fat pointer alternative if you need to be able to cast both ways. In the end this probably boils down to application specific needs. But my quest for a method that's simple AND fast enough for general use continues...
https://hokstad.com/archives/2005/03/type_compositio
CC-MAIN-2021-21
en
refinedweb
Feature #16746closed Endless method definition Added by mame (Yusuke Endoh) about 1 year ago. Updated 4 months ago. 1 year 1 year ago - Related to Feature #5054: Compress a sequence of ends added Updated by shyouhei (Shyouhei Urabe) about 1 year ago - Related to Feature #5065: Allow "}" as an alternative to "end" added Updated by shyouhei (Shyouhei Urabe) about 1 year ago - Related to Feature #12241: super end added Updated by mame (Yusuke Endoh) about 1 year ago - File deleted ( endless-method-definition.patch) Updated by mame (Yusuke Endoh) about 1 year ago Updated by matz (Yukihiro Matsumoto) about 1 year ago - Tags deleted ( joke) Updated by mame (Yusuke Endoh) about 1 year 1 year ago Updated by ruurd (Ruurd Pels) about 1 year ago Updated by nobu (Nobuyoshi Nakada) about 1 year ago Updated by ioquatix (Samuel Williams) about 1 year ago Have you considered adopting Python's whitespace sensitive indentation? def hello(name): puts("Hello, #{ name }") hello("endless Ruby") #=> Hello, endless Ruby Updated by mame (Yusuke Endoh) about 1 year 1 year 1 year year ago I'd like to experiment with this new syntax. We may find drawbacks in the future, but to find them, we need to experiment first. Matz. Updated by nobu (Nobuyoshi Nakada) about 1 year year year ago It is not easy to control parsing time warnings, and bothers tests. Updated by Eregon (Benoit Daloze) about 1 year) 12 months ago Is it intended to allow multi-line definitions in this style? I think we should not, and only single-line expressions should be allowed. I think that's not the original purpose of this ticket and rather an incorrect usage of this new syntax, e.g., Updated by marcandre (Marc-Andre Lafortune) 10 months ago tldnr; I am against this proposal, it decreases readability, increases complexity and doesn't bring anything to the table (in that order) I read the thread carefully and couldn't find a statement of what is the intended gain. matz (Yukihiro Matsumoto), could you please explain what you see are the benefits of this new syntax (seriously)? We may find drawbacks in the future, but to find them, we need to experiment first. I see many drawbacks already. Also, it is already possible to experiment with tools like. Here's a list of drawbacks to start: 1) Higher cognitive load. Ruby is already quite complex, with many nuances. We can already define methods with def, define_method, define_singleton_method. 2) Requiring all text editors, IDE, parsers, linters, code highlighters to support this new syntax 3) Encouraging less readable code With this endless method definition, if someone want to know what a method does, one has to mentally parse the name and arguments, and find the = to see where the code actually starts. My feeling is that the argument signature is not nearly as important as the actual code of the method definition. def header_convert(name = nil, &converter) = header_fields_converter.add_converter(name, &converter) # ^^^^^^^^^^^^^^^^^^^^^^^^^^ all this needs to be scanned by the eye to get to the definition # ^ ^^^ the first `=` needs to be ignored, one has to grep for ") =" That method definition, even if very simple, deserves it's own line start, which makes it easy to locate. If Rubyists used this new form to write methods in two lines, with a return after the =, it is still harder to parse as someone has to get to the end of the line to locate that =. After a def the eye could be looking for an end statement. 
def header_convert(name = nil, &converter) = header_fields_converter.add_converter(name, &converter) # <<< hold on, is there a missing `end`? or is the indent wrong? Oh, no, the line ended with `=` def more_complex_method(...) # .... end I believe it is impossible to improve the readability of a one-line method as written currently. This new form can only make it harder to understand, not easier. def header_convert(name = nil, &converter) header_fields_converter.add_converter(name, &converter) end def more_complex_method(...) # ... end If header_convert ever need an extra line of code, there won't be a need to reformat the code either. For these reason I am against this proposal and my hope is that is reverted and that we concentrate on more meaningful improvements. Updated by Eregon (Benoit Daloze) 10 months ago Noteworthy is the current syntax in trunk is def name(*args) = expr (and not def: name), so there is no visual cue that this is a endless method definition except the = which comes very late. I agree with marcandre (Marc-Andre Lafortune), I think it makes code just less readable and harder to maintain. Updated by marcandre (Marc-Andre Lafortune) 10 months ago Eregon (Benoit Daloze) wrote in #note-23: Noteworthy is the current syntax in trunk is def name(*args) = expr(and not def: name), so there is no visual cue that this is a endless method definition except the =which comes very late. Oh, my mistake. That syntax makes it even less readable! There are potentially = already in the method definition... I'm even more against it 😅 I edited my post. Updated by zverok (Victor Shepelev) 10 months ago To add an "opinion perspective"... I, for one, even if not fascinated but at least intrigued by new syntax. My reasoning is as follows: - The most precious Ruby's quality for me is "expressiveness at the level of the single 'phrase' (expression)", and it is not "code golf"-expressiveness, but rather structuring language's consistency around the idea of building phrases with all related senses packed, while staying lucid about the intention (I believe that, as our "TIMTOWTDI" is opposite to Python's "there should be one obvious way...", this quality is also opposite to Pythons "sparse is better than dense") - (I am fully aware that nowadays I am in a minority with this viewpoint: whenever the similar discussions are raised, I see a huge difference between "it can be stated in one phrase" vs "it can not", while most of my correspondents find it negligible) - I believe that the difference of "how you write it when there is one statement" vs "...more than one statement" is intentional, and fruitful: "just add one more line to 10-line method" typically small addition, but "just add one more line to 1-line method" frequently makes one think: may be that's not what you really need to do, maybe data flow becomes unclear? One existing example: collection.map { |x| do_something_with(x) } # one line! cool! # ...what if I want to calculate several other values on the way? 
I NEED to restructure it:

```ruby
collection.map do |x|
  log_processing(x)
  total += x
  do_something_with(x)
end
# ...which MIGHT imply that "I am doing something wrong", and maybe what would clearer express intent
# is separating calculation of total, and move `log_processing` inside `do_something`
```

- So, it has actually always felt "a bit too wordy" to me to write trivial/short methods as

```ruby
def some_method(foo)
  foo.bar(baz).then { |result| result + 3 }
end
```

- ...alongside the "nicer formatting" of empty lines between methods, and probably the requirement to document every method, and whatnot, it means that extraction of a trivial utility method feels like "making MORE code than necessary", so I'd really like to try how it would feel with

```ruby
def just_utility(foo)= foo.bar(baz).then { |result| result + 3 }
def other_utility(foo)= "<#{self}(#{foo})>"
def third_utility(foo)= log.print(nicely_formatted(foo))
```

- ...and I believe that "what if you need to add a second statement, you will reformat the code completely?" is a good thing, not bad: you'll think twice before adding something to a "simple utility one-statement" method that really, actually, maybe doesn't belong there.

Updated by mame (Yusuke Endoh) 8 months ago

In the previous dev-meeting, matz said that it should be prohibited to define a setter method with an endless definition.

```ruby
# prohibited
def foo=(x) = @x = x
```

There are two reasons:

- This code is very confusing, and it is not supposed that a destructive operation is defined in this style.
- We want to allow def foo=42 as def foo() = 42 in future. It is difficult to implement in the short term, though. If def foo=(x) = @x = x is allowed once, it will become more difficult.

I will create a PR soon.

Updated by pankajdoharey (Pankaj Doharey) 8 months ago

Why do we even need a def? Why not just do it like Haskell?

```ruby
triple(x) = 3*x
```

Since this is an assignment, if triple hasn't already been assigned it should create a function; otherwise it should pose a syntax error, a "Function redefinition error" or something.

Or perhaps a let keyword?

```ruby
let double x = x * 2
```

Updated by etienne (Étienne Barrié) 4 months ago

I haven't seen it mentioned in the comments so I just wanted to point out that the regular method-definition syntax doesn't require semicolons, and is very close to this experiment, given the parentheses are mandatory:
https://bugs.ruby-lang.org/issues/16746?tab=notes
CC-MAIN-2021-21
en
refinedweb
Questions around the testing topic come up quite often together with React Query, so I'll try to answer some of them here. I think one reason for that is that testing "smart" components (also called container components) is not the easiest thing to do. With the rise of hooks, this split has been largely deprecated. It is now encouraged to consume hooks directly where you need them rather than doing a mostly arbitrary split and drill props down. I think this is generally a very good improvement for colocation and code readability, but we now have more components that consume dependencies outside of "just props". They might useContext. They might useSelector. Or they might useQuery. Those components are technically no longer pure, because calling them in different environments leads to different results. When testing them, you need to carefully setup those surrounding environments to get things working. Mocking network requests Since React Query is an async server state management library, your components will likely make requests to a backend. When testing, this backend is not available to actually deliver data, and even if, you likely don't want to make your tests dependent on that. There are tons of articles out there on how to mock data with jest. You can mock your api client if you have one. You can mock fetch or axios directly. I can only second what Kent C. Dodds has written in his article Stop mocking fetch: Use mock service worker by @ApiMocking It can be your single source of truth when it comes to mocking your apis: - works in node for testing - supports REST and GraphQL - has a storybook addon so you can write stories for your components that useQuery - works in the browser for development purposes, and you'll still see the requests going out in the browser devtools - works with cypress, similar to fixtures With our network layer being taken care of, we can start talking about React Query specific things to keep an eye on: QueryClientProvider Whenever you use React Query, you need a QueryClientProvider and give it a queryClient - a vessel which holds the QueryCache. The cache will in turn hold the data of your queries. I prefer to give each test its own QueryClientProvider and create a new QueryClient for each test. That way, tests are completely isolated from each other. A different approach might be to clear the cache after each test, but I like to keep shared state between tests as minimal as possible. Otherwise, you might get unexpected and flaky results if you run your tests in parallel. For custom hooks If you are testing custom hooks, I'm quite certain you're using react-hooks-testing-library. It's the easiest thing there is to test hooks. With that library, we can wrap our hook in a wrapper, which is a React component to wrap the test component in when rendering. I think this is the perfect place to create the QueryClient, because it will be executed once per test: const createWrapper = () => { // ✅ creates a new QueryClient for each test const queryClient = new QueryClient() return ({ children }) => ( <QueryClientProvider client={queryClient}>{children}</QueryClientProvider> ) } test("my first test", async () => { const { result } = renderHook(() => useCustomHook(), { wrapper: createWrapper() }) } For components If you want to test a Component that uses a useQuery hook, you also need to wrap that Component in QueryClientProvider. A small wrapper around render from react-testing-library seems like a good choice. 
Have a look at how React Query does it internally for their tests. Turn off retries It's one of the most common "gotchas" with React Query and testing: The library defaults to three retries with exponential backoff, which means that your tests are likely to timeout if you want to test an erroneous query. The easiest way to turn retries off is, again, via the QueryClientProvider. Let's extend the above example: const createWrapper = () => { const queryClient = new QueryClient({ defaultOptions: { queries: { // ✅ turns retries off retry: false, }, }, }) return ({ children }) => ( <QueryClientProvider client={queryClient}>{children}</QueryClientProvider> ) } test("my first test", async () => { const { result } = renderHook(() => useCustomHook(), { wrapper: createWrapper() }) } This will set the defaults for all queries in the component tree to "no retries". It is important to know that this will only work if your actual useQuery has no explicit retries set. If you have a query that wants 5 retries, this will still take precedence, because defaults are only taken as a fallback. setQueryDefaults The best advice I can give you for this problem is: Don't set these options on useQuery directly. Try to use and override the defaults as much as possible, and if you really need to change something for specific queries, use queryClient.setQueryDefaults. So for example, instead of setting retry on useQuery: const queryClient = new QueryClient() function App() { return ( <QueryClientProvider client={queryClient}> <Example /> </QueryClientProvider> ) } function Example() { // 🚨 you cannot override this setting for tests! const queryInfo = useQuery('todos', fetchTodos, { retry: 5 }) } Set it like this: const queryClient = new QueryClient({ defaultOptions: { queries: { retry: 2, }, }, }) // ✅ only todos will retry 5 times queryClient.setQueryDefaults('todos', { retry: 5 }) function App() { return ( <QueryClientProvider client={queryClient}> <Example /> </QueryClientProvider> ) } Here, all queries will retry two times, only todos will retry five times, and I still have the option to turn it off for all queries in my tests 🙌. ReactQueryConfigProvider Of course, this only works for known query keys. Sometimes, you really want to set some configs on a subset of your component tree. In v2, React Query had a ReactQueryConfigProvider for that exact use-case. You can achieve the same thing in v3 with a couple of lines of codes: const ReactQueryConfigProvider = ({ children, defaultOptions }) => { const client = useQueryClient() const [newClient] = React.useState( () => new QueryClient({ queryCache: client.getQueryCache(), muationCache: client.getMutationCache(), defaultOptions, }) ) return <QueryClientProvider client={newClient}>{children}</QueryClientProvider> } You can see this in action in this codesandbox example. Always await the query Since React Query is async by nature, when running the hook, you won't immediately get a result. It usually will be in loading state and without data to check. The async utilities from react-hooks-testing-library offer a lot of ways to solve this problem. 
For the simplest case, we can just wait until the query has transitioned to success state: const createWrapper = () => { const queryClient = new QueryClient({ defaultOptions: { queries: { retry: false, }, }, }) return ({ children }) => ( <QueryClientProvider client={queryClient}>{children}</QueryClientProvider> ) } test("my first test", async () => { const { result, waitFor } = renderHook(() => useCustomHook(), { wrapper: createWrapper() }) // ✅ wait until the query has transitioned to success state await waitFor(() => result.current.isSuccess) expect(result.current.data).toBeDefined() } Silence the error console Per default, React Query prints errors to the console. I think this is quite disturbing during testing, because you'll see 🔴 in the console even though all tests are 🟢. React Query allows overwriting that default behaviour by setting a logger, so that's what I'm usually doing: import { setLogger } from 'react-query' setLogger({ log: console.log, warn: console.warn, // ✅ no more errors on the console error: () => {}, }) Putting it all together I've setup a quick repository where all of this comes nicely together: mock-service-worker, react-testing-library and the mentioned wrapper. It contains four tests - basic failure and success tests for custom hooks and components. Have a look here: That's it for today. Feel free to reach out to me on twitter if you have any questions, or just leave a comment below ⬇️ Discussion (0)
https://practicaldev-herokuapp-com.global.ssl.fastly.net/tkdodo/testing-react-query-14cb
CC-MAIN-2021-21
en
refinedweb
42064/how-do-i-use-the-xor-operator-in-python-regular-expression $ to match the end of the ...READ MORE Hi. Good question! Well, just like what ...READ MORE If you're already normalizing the inputs to ...READ MORE What i found is that you can use ...READ MORE You can use np.maximum.reduceat: >>> _, idx = np.unique(g, ...READ MORE For Python 3, try doing this: import urllib.request, ...READ MORE You can also use the random library's ...READ MORE Syntax : list. count(value) Code: colors = ['red', 'green', ...READ MORE can you give an example using a ...READ MORE You can simply the built-in function in ...READ MORE OR Already have an account? Sign in.
https://www.edureka.co/community/42064/how-do-i-use-the-xor-operator-in-python-regular-expression
CC-MAIN-2021-21
en
refinedweb
I want to use Parse SDK with my android project in Eclipse, but I don't know how to do this because all the tutorials are for Android Studio and Gradle. How can I use this in Eclipse? It's not that hard: libsin your Eclipse project Add all the neccesary lines of code as described in Quick Start Guide. Obviously you'll need to add some imports: import com.parse.Parse; import com.parse.ParseInstallation; Hmmm, I think that was it - let me know if it works for you! Simply go to Parse.com and open your application's setting page. Then open 'Push' tab and toggle the 'Client push enabled?', More Info Here. See the image for clarity:
http://www.dlxedu.com/askdetail/3/275e952335db2d5238c996cc1df57c7d.html
CC-MAIN-2018-47
en
refinedweb
I am curious how functional languages compare (in general) to more "traditional" languages such as C# and Java for large programs. Does program flow become difficult to follow more quickly than if a non-functional language is used? Are there other issues or things to consider when writing a large software project using a functional language? Thanks! Does program flow become difficult to follow more quickly than if a >non-functional language is used? "Program flow" is probably the wrong concept to analyze a large functional program. Control flow can become baroque because there are higher-order functions, but these are generally easy to understand because there is rarely any shared mutable state to worry about, so you can just think about arguments and results. Certainly my experience is that I find it much easier to follow an aggressively functional program than an aggressively object-oriented program where parts of the implementation are smeared out over many classes. And I find it easier to follow a program written with higher-order functions than with dynamic dispatch. I also observe that my students, who are more representative of programmers as a whole, have difficulties with both inheritance and dynamic dispatch. They do not have comparable difficulties with higher-order functions. Are there other issues or things to consider when writing a large software project using a functional language? The critical thing is a good module system. Here is some commentary. The most powerful module system I know of the unit system of PLT Scheme designed by Matthew Flatt and Matthias Felleisen. This very powerful system unfortunately lacks static types, which I find a great aid to programming. The next most powerful system is the Standard ML module system. Unfortunately Standard ML, while very expressive, also permits a great many questionable constructs, so it is easy for an amateur to make a real mess. Also, many programmers find it difficult to use Standard ML modules effectively. The Objective Caml module system is very similar, but there are some differences which tend to mitigate the worst excesses of Standard ML. The languages are actually very similar, but the styles and idioms of Objective Caml make it significantly less likely that beginners will write insane programs. The least powerful/expressive module system for a functional langauge is the Haskell module system. This system has a grave defect that there are no explicit interfaces, so most of the cognitive benefit of having modules is lost. Another sad outcome is that while the Haskell module system gives users a hierarchical name space, use of this name space ( import qualified, in case you're an insider) is often deprecated, and many Haskell programmers write code as if everything were in one big, flat namespace. This practice amounts to abandoning another of the big benefits of modules. If I had to write a big system in a functional language and had to be sure that other people understood it, I'd probably pick Standard ML, and I'd establish very stringent programming conventions for use of the module system. (E.g., explicit signatures everywhere, opague ascription with :>, and no use of open anywhere, ever.) For me the simplicity of the Standard ML core language (as compared with OCaml) and the more functional nature of the Standard ML Basis Library (as compared with OCaml) are more valuable than the superior aspects of the OCaml module system. 
I've worked on just one really big Haskell program, and while I found (and continue to find) working in Haskell very enjoyable, I really missed not having explicit signatures. Do functional languages cope well with complexity? Some do. I've found ML modules and module types (both the Standard ML and Objective Caml) flavors invaluable tools for managing complexity, understanding complexity, and placing unbreachable firewalls between different parts of large programs. I have had less good experiences with Haskell Final note: these aren't really new issues. Decomposing systems into modules with separate interfaces checked by the compiler has been an issue in Ada, C, C++, CLU, Modula-3, and I'm sure many other languages. The main benefit of a system like Standard ML or Caml is the that you get explicit signatures and modular type checking (something that the C++ community is currently struggling with around templates and concepts). I suspect that these issues are timeless and are going to be important for any large system, no matter the language of implementation. Functional programming aims to reduce the complexity of large systems, by isolating each operation from others. When you program without side-effects, you know that you can look at each function individually - yes, understanding that one function may well involve understanding other functions too, but at least you know it won't interfere with some other piece of system state elsewhere. Of course this is assuming completely pure functional programming - which certainly isn't always the case. You can use more traditional languages in a functional way too, avoiding side-effects where possible. But the principle is an important one: avoiding side-effects leads to more maintainable, understandable and testable code. I'd say the opposite. It is easier to reason about programs written in functional languages due to the lack of side-effects. Usually it is not a matter of "functional" vs "procedural"; it is rather a matter of lazy evaluation. Lazy evaluation is when you can handle values without actually computing them yet; rather, the value is attached to an expression which should yield the value if it is needed. The main example of a language with lazy evaluation is Haskell. Lazy evaluation allows the definition and processing of conceptually infinite data structures, so this is quite cool, but it also makes it somewhat more difficult for a human programmer to closely follow, in his mind, the sequence of things which will really happen on his computer. For mostly historical reasons, most languages with lazy evaluation are "functional". I mean that these language have good syntaxic support for constructions which are typically functional. Without lazy evaluation, functional and procedural languages allow the expression of the same algorithms, with the same complexity and similar "readability". Functional languages tend to value "pure functions", i.e. functions which have no side-effect. Order of evaluation for pure function is irrelevant: in that sense, pure functions help the programmer in knowing what happens by simply flagging parts for which knowing what happens in what order is not important. But that is an indirect benefit and pure functions also appear in procedural languages. From what I can say, here are the key advantages of functional languages to cope with complexity : But the downside of those languages is that they lack support and experience in the industry. 
Having portability, performance and interoperability may be a real challenge, whereas on a platform like Java all of this seems obvious. That said, a language based on the JVM like Scala could be a really nice fit to benefit from both sides.

Does program flow become difficult to follow more quickly than if a non-functional language is used?

This may be the case, in that functional style encourages the programmer to prefer thinking in terms of abstract, logical transformations, mapping inputs to outputs. Thinking in terms of "program flow" presumes a sequential, stateful mode of operation--and while a functional program may have sequential state "under the hood", it usually isn't structured around that. The difference in perspective can be easily seen by comparing imperative vs. functional approaches to "process a collection of data". The former tends to use structured iteration, like a for or while loop, telling the program "do this sequence of tasks, then move to the next one and repeat, until done". The latter tends to use abstracted recursion, like a fold or map function, telling the program "here's a function to combine/transform elements--now use it". It isn't necessary to follow the recursive program flow through a function like map; because it's a stateless abstraction, it's sufficient to think in terms of what it means, not what it's doing. It's perhaps somewhat telling that the functional approach has been slowly creeping into non-functional languages--consider foreach loops, Python's list comprehensions...
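A tiny sketch of the contrast just described, using Scala for illustration (any similar language would do):

// Imperative: structured iteration, mutable state, explicit flow.
def sumSquaresImperative(xs: List[Int]): Int = {
  var total = 0
  for (x <- xs) total += x * x
  total
}

// Functional: abstracted recursion via map and fold; you reason about
// what the expression means rather than stepping through its flow.
def sumSquaresFunctional(xs: List[Int]): Int =
  xs.map(x => x * x).foldLeft(0)(_ + _)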
http://m.dlxedu.com/m/askdetail/3/021a9c9910e9bcab8ee8ab30496f0d2c.html
CC-MAIN-2018-47
en
refinedweb
We had a lengthy discussion recently during code review whether scala.Option.fold() is idiomatic and clever or maybe unreadable and tricky? Let's first describe what the problem is. Option.fold does two things: maps a function f over Option's value (if any) or returns an alternative alt if it's absent. Using simple pattern matching we can implement it as follows:

val option: Option[T] = //...
def alt: R = //...
def f(in: T): R = //...

val x: R = option match {
  case Some(v) => f(v)
  case None => alt
}

If you prefer a one-liner, fold is actually a combination of map and getOrElse:

val x: R = option map f getOrElse alt

Or, if you are a C programmer that still wants to write in C, but using the Scala compiler:

val x: R = if (option.isDefined) f(option.get) else alt

Interestingly, this function is known in some libraries under the name cata (short for catamorphism). cata at least has some theoretical background; Option.fold just sounds like a random name collision that doesn't bring anything to the table, apart from confusion. I know what you'll say - that TraversableOnce has fold as well! But there it has a different contract (combining elements pairwise, starting from a neutral value), which breaks the consistency, especially when realizing that Option.foldLeft() and Option.foldRight() have the correct contract (but it doesn't mean they are more readable).

The only way to understand folding over an option is to imagine Option as a sequence with 0 or 1 elements. Then it sort of makes sense, right? No:

def double(x: Int) = x * 2
Some(21).fold(-1)(double)  //OK: 42
None.fold(-1)(double)      //OK: -1

but contrast this with how real collections fold (see the sketch at the end of this article).

The Foldable typeclass describes various flavours of folding in Haskell. There are the familiar foldl/foldr/foldl1/foldr1, in Scala named foldLeft/foldRight/reduceLeft/reduceRight accordingly. While other folds are quite complex, this one simply takes a foldable container of ms (which have to be Monoids) and returns the same Monoid type. A quick recap: a type can be a Monoid if it has an associative binary operation with a neutral element - for example strings under concatenation with neutral "" (x ++ "" == "" ++ x == x) or numbers under multiplication with neutral 1 (x * 1 == 1 * x == x). The String case is the simplest: the program folds over the elements in a list and concatenates them, because concatenation is the operation defined in the Monoid String typeclass instance.

Back to options (or more precisely Maybe). Folding over a Maybe monad whose type parameter is a Monoid (I can't believe I just said it) has an interesting interpretation: it either returns the value inside Maybe or a default Monoid value:

> fold (Just "abc")
"abc"
> fold Nothing :: String
""

Just "abc" is the same as Some("abc") in Scala. You can see here that if Maybe String is Nothing, the neutral String monoid value is returned, that is an empty string.

Summary

Haskell shows that folding (also over Maybe) can be at least consistent. In Scala Option.fold is unrelated to List.fold, confusing and unreadable. I advise avoiding it and staying with slightly more verbose map/getOrElse transformations or pattern matching.

PS: Did I mention there is also Either.fold() (with yet another contract) but no Try.fold()?
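And here is the sketch promised above, contrasting Option.fold with the collection fold (a small illustration, not from the original post; behavior follows plain Scala):

def double(x: Int) = x * 2
// List(21).fold(-1)(double)   // does not even compile: on collections,
//                             // fold expects a binary (Int, Int) => Int
List(1, 2, 3).fold(0)(_ + _)   // OK: 6 - combines elements with a neutral value
Some(21).fold(-1)(double)      // OK: 42 - maps the value, or returns the default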
http://www.javacodegeeks.com/2014/06/option-fold-considered-unreadable.html
CC-MAIN-2015-40
en
refinedweb
An instance of the Response class represents the data to be sent in response to a web request. Response is provided by the google.appengine.ext.webapp module.

- Introduction
- Response()
- Class methods:
- Instance variables:
- Instance methods:

Introduction

When the webapp framework calls a request handler method, the handler instance's response member is initialized with an empty Response instance. The handler method prepares the response by manipulating the Response instance, such as by writing body data to the out member or setting headers on the headers member.

import datetime
from google.appengine.ext import webapp

class MyRequestHandler(webapp.RequestHandler):
    def get(self):
        self.response.out.write("<html><body>")
        self.response.out.write("<p>Welcome to the Internet!</p>")
        self.response.out.write("</body></html>")
        expires_date = datetime.datetime.utcnow() + datetime.timedelta(365)
        expires_str = expires_date.strftime("%d %b %Y %H:%M:%S GMT")
        self.response.headers.add_header("Expires", expires_str)

webapp sends the response when the handler method returns. The content of the response is the final state of the Response object when the method returns.

Note: Manipulating the object in the handler method does not communicate any data to the user. In particular, this means that webapp cannot send data to the browser and then perform additional logic, as in a streaming application. (App Engine applications cannot stream data to the browser, with or without webapp.)

By default, responses use an HTTP status code of 200 ("OK"). To change the status code, the application uses the set_status() method. See also the RequestHandler object's error() method for a convenient way to set error codes. If the response does not specify a character set in the Content-Type header, the character set for the response is set to UTF-8 automatically.

Constructor

The constructor of the Response class is defined as follows:

- class Response()

An outgoing response. Typically, the WSGIApplication instantiates a RequestHandler and initializes it with a Response object with default values.

Class Methods

The Response class provides the following class methods:

- Response.http_status_message(code)

Returns the default HTTP status message for a given HTTP status code.

Arguments: code - The HTTP status code.

Instance Variables

An instance of the Response class has the following variable members:

- out

An instance of the StringIO class that contains the body text of the response. The contents of this object are sent as the body of the response when the request handler method returns.

- headers

An instance of the wsgiref.headers.Headers class that contains the headers of the response. The contents of this object are sent as the headers of the response when the request handler method returns. For security reasons, some response headers cannot be modified by the application. See Responses.

Instance Methods

An instance of the Response class has the following methods:

- set_status(code, message=None)

Sets the HTTP status code, and an optional message, for the response.

Arguments: code - The HTTP status code to use for the response. message - The HTTP status message to use for the response. If none is given, use the default from the HTTP/1.1 specification.

- clear()

Clears all data written to the output stream (out).

- wsgi_write(start_response)

Writes the response using WSGI semantics. Typically, an application does not call this directly. Instead, webapp calls this to write the response when the request handler method returns.
Arguments: start_response - A WSGI-compatible start_response function.
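As a quick illustration of the status, header, and output facilities described above, a handler might combine clear() and set_status() like this (a sketch; the handler name and response text are made up):

from google.appengine.ext import webapp

class GoneHandler(webapp.RequestHandler):
    def get(self):
        # Discard anything already buffered in self.response.out.
        self.response.clear()
        # 410 is a standard code, so passing message=None (the default)
        # uses the HTTP/1.1 message "Gone".
        self.response.set_status(410)
        self.response.headers["Content-Type"] = "text/plain"
        self.response.out.write("This resource is no longer available.")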
https://cloud.google.com/appengine/docs/python/tools/webapp/responseclass
CC-MAIN-2015-40
en
refinedweb
I am confused... XX... But how can I add an element into this??? Like above?? ..... sorry

This is a discussion on MAZE Problem within the C++ Programming forums, part of the General Programming Boards category.

I think I figured it out now, hahah, thx thx.

Then you need the first array; copy and paste the whole thing in from

Code:
{
{{0}},

. And there you are.

Code:
{{ lots of numbers up to }}
}

These should make sense?

Code:
int matrix[MATRIX_SIZE][MATRIX_SIZE] =
{
    { ...,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0},
    {1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0},
    {0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1, ...}
};

What do I need to write in the int main()? I think I need to allow the user input and read the data from the array and output it. I am trying to produce the code, but it comes up with more than 15 errors. What else do I need to modify???? -----------THX---THX-------------

I have put these into the int main(). What else is missing? Or do I need to define another function??

Code:
int main()
{
    cout <<"Ender Maze wanted, 0,1,2,3:\n";
    cout <<" 0- Random Maze\n";
    cout <<" 1- Set Maze (1)\n";
    cout <<" 2- Exit\n";
    cin >> x;
    return 0;

plz help, cheers.

OIC. This is what I thought, because they don't read the input and get the output, so I need to create a function for the data to process!? So I need specifically 0 to choose map[0] and 1 to choose map[1]??

for(int maze=0; maze<3 ; maze++)
    cout<<matrix_size[maze]<<"------"<<matrix_data[n]<<endl;

Should these be put inside int main() or void()?? I am really sorry for all these......

You can do it inside main directly, or put it in another function, as you wish; but if you put it in another function you do have to call that function from main, or else it won't happen. Notice that there is no function called void().

Sorry. So I think I will put them inside main... I have seen the other thread where you replied using printf and scanf - are they more useful for me?? (I know it sounds weird.) I will use that method to see if it gets me anywhere. Thanks. I am sorry for all these questions, but I am really trying to get them to work; I have not stopped working on this since you replied to me this morning, ahahaha. ---------Thanks God------------

I have done the following changes in the int main(); still 3 errors, can anyone point out what's going on????

Code:
int main()
using namespace std;
{
    int maze;
    cout <<"Enter Maze wanted, 0,1,2,3:\n";
    cout <<" 0- Random Maze\n";
    cout <<" 1- Set Maze (1)\n";
    cout <<" 2- Exit\n";
    cin >> maze;
    {
    for (maze =0; maze < matrix_data[n]; maze++)
        cout<<matrix_data[n] <<endl;
    }

Thanks, thanks.

The three errors tell you exactly what's going on:
- unclosed brace on line "{" (right after cin >> maze)
- variable n not defined
- unable to compare int (maze) with int[][] (matrix_data[n]) (although your error might say something about conversions from int[][] to int)

Not a compile error, probably, but cout << matrix_data[n] will probably print out a hex memory address, since matrix_data[n] decays into a pointer.
Still getting syntax errors.

Code:
int main()
using namespace std;
{
    int maze[][], n;
    cout <<" Enter Maze wanted, 0,1,2:\n";
    cout <<" 0- Random Maze\n";
    cout <<" 1- Set Maze (1)\n";
    cout <<" 2- Exit\n";
    cin >> maze[][];
    matrix_data[n] = maze[][];
    {
    for (maze[][] =0; maze[][] < matrix_data[n]; maze[][]++)
        cout<<matrix_data[n] <<endl;
    printf("%d\n", matrix_data[n]);
    return 0;
    }
}

I am also having an error on this line...., and also, what does this mean? "unable to compare int (maze) with int[][] (matrix_data[n])"

This is the end of it....

Code:
. . . .
];

void output_program(struct prog *progp)
{
    char *codep = progp->code;
    int token;
    while (token = *codep++)
        fputs(token_table[token].name, stdout);
    putchar('\n');
}

void print_matrix(void)
{
    int i, j;
    char symbs[] = " OX";
    for (i=0; i < MATRIX_SIZE; i++)
    {
        for (j=0; j < MATRIX_SIZE; j++)
            putchar(symbs[trace_matrix[i][j]]);
        putchar('\n');
    }
}

So the above must be wrong...

Code:
for (maze[][] =0; maze[][] << matrix_data[n]; maze[][]++)

If I can't use an integer with a matrix,,,, then how can I retrieve the data? Thanks, thanks.

Notice in your initializer that you've embedded the semicolon in the thing, rather than it being at the end like it should be.

This is so amazingly wrong that it makes me give up hope.

Code:
for (maze[][] =0; maze[][] << matrix_data[n]; maze[][]++)

matrix_data is a 3-dimensional array. matrix_data[0] is a 2-dimensional array (namely, a map). matrix_data[0][1] is a 1-dimensional array (specifically, the second row of the map). matrix_data[0][1][2] is an integer (the third number in the second row of the map). You can only deal with specific elements of the array, as the sketch below illustrates.
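To make that last point concrete, here is a minimal sketch of the kind of program being discussed (array sizes and contents are made up; the thread's real maps appear to be 32x32):

#include <iostream>
using namespace std;

const int MATRIX_SIZE = 3;             // small, for illustration only
int matrix_data[2][MATRIX_SIZE][MATRIX_SIZE] =
{
    { {1,1,1}, {1,0,1}, {1,1,1} },     // map 0
    { {0,1,0}, {1,1,1}, {0,1,0} }      // map 1
};

int main()
{
    int maze;
    cout << "Enter Maze wanted, 0 or 1: ";
    cin >> maze;
    if (maze < 0 || maze > 1)
        return 1;
    // Index individual elements; there is no maze[][] syntax for
    // assigning or comparing whole maps.
    for (int i = 0; i < MATRIX_SIZE; i++)
    {
        for (int j = 0; j < MATRIX_SIZE; j++)
            cout << (matrix_data[maze][i][j] ? 'X' : ' ');
        cout << '\n';
    }
    return 0;
}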
http://cboard.cprogramming.com/cplusplus-programming/100918-maze-problem-4.html
CC-MAIN-2015-40
en
refinedweb
package org.apache.http.impl.execchain;

import java.io.InterruptedIOException;

import org.apache.http.annotation.Immutable;

/**
 * Signals that the request has been aborted.
 *
 * @since 4.3
 */
@Immutable
public class RequestAbortedException extends InterruptedIOException {

    private static final long serialVersionUID = 4973849966012490112L;

    public RequestAbortedException(final String message) {
        super(message);
    }

    public RequestAbortedException(final String message, final Throwable cause) {
        super(message);
        if (cause != null) {
            initCause(cause);
        }
    }

}
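A small usage sketch (not part of the original source file) showing why extending InterruptedIOException matters - generic I/O interruption handlers catch this exception too:

import java.io.InterruptedIOException;
import org.apache.http.impl.execchain.RequestAbortedException;

public class RequestAbortedDemo {
    public static void main(String[] args) {
        try {
            // Simulate a request being aborted somewhere in an execution chain.
            throw new RequestAbortedException("Request aborted",
                                              new InterruptedException());
        } catch (InterruptedIOException ex) {
            // RequestAbortedException is an InterruptedIOException,
            // so this catch block handles it as well.
            System.out.println(ex.getMessage() + ", cause: " + ex.getCause());
        }
    }
}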
http://hc.apache.org/httpcomponents-client-4.3.x/httpclient/xref/org/apache/http/impl/execchain/RequestAbortedException.html
CC-MAIN-2015-40
en
refinedweb
hello, when i compile, there is no error. however when i run it, it shows this message "Exception in thread "main" java.lang.NoSuchMethodError: main Press any key to continue..." any idea? my program is related to polymorphism via inheritance, which is separated into 2 files, StaffMember.java & human.java. The Volunteer program allows the user to enter their details and it will link and display them according to the program in the StaffMember class. below is my code:

Code java:

//StaffMember.java
import java.util.*;
import java.text.*;

public class StaffMember
{
    protected String name;
    protected String address;
    protected String phone;

    public Staffmember(String eName, String eAddress, String ePhone)
    {
        name = eName;
        address = eAddress;
        phone = ePhone;
    }

    public String toString()
    {
        String result = "Name :" + name + "\n";
        result += "Address:" + address + "\n";
        result += "Phone:" + phone;
        return result;
    }
}

//human.java
import java.util.*;
import java.text.*;

public class human extends Staffmember
{
    public human(String eName, String eAddress, String ePhone)
    {
        super(eName, eAddress, ePhone);
    }
}

the software i'm using is JCreator, however i am not able to display nor enter details, it only shows "Exception in thread "main" java.lang.NoSuchMethodError: main Press any key to continue..."
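For reference: that error means the JVM found no public static void main(String[]) entry point in the class being run. Note also that the constructor is spelled Staffmember while the class is StaffMember - Java names are case-sensitive, so that mismatch must be fixed too. A minimal driver class along these lines would give the program an entry point (the class name and sample data here are made up):

//StaffMemberTest.java
public class StaffMemberTest
{
    // The JVM looks for exactly this signature in the class you run.
    public static void main(String[] args)
    {
        human h = new human("Alice", "12 Main St", "555-0100");
        System.out.println(h);   // uses the inherited toString()
    }
}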
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/9145-i-dont-know-why-i-cant-see-output-nor-enter-input-printingthethread.html
CC-MAIN-2015-40
en
refinedweb
This paper outlines the steps to migrate an ADF 10.1.3 JSF application using ADF Faces components to JDeveloper 11g. Using the SRDemo Sample application as a hands-on example, it explains the automatic migration support and leads you step-by-step through the temporary post-migration steps required in the production release due to known issues.

1 Introduction

For the JDeveloper 11g production release, our goal is to support migration of JDeveloper/ADF 10.1.3 applications to the 11g release with minimal-to-no manual changes. In this JDeveloper 11g release, while migration is mostly automated, a few known migration issues with ADF Faces-based 10.1.3 applications are listed in the release notes. To assist you in better understanding these issues, this paper walks step-by-step through the migration of the ADFBC version of the familiar 10.1.3 SRDemo sample application. We hope that this information will make it easier for you to try migrating your own 10.1.3 ADF Faces applications to help us find additional migration problems that need fixing before our JDeveloper 11g production release.

2 Overview of Automated Migration Support

JDeveloper 11g supports automated migration of workspaces from 10.1.2.x and 10.1.3.x releases. This section describes the supported migration paths and provides more specific migration details about the automated migration of 10.1.3 JSF applications which use ADF Faces components like the SRDemo sample does.

2.1 Supported Migration Paths

After making a backup copy of your existing JDeveloper 10.1.2.x or 10.1.3.x workspace, you can open the workspace in the JDeveloper 11.1.1.0.0 release. All necessary files will undergo migration at this time, in some cases after acknowledging some choices in a Migration Wizard. In general, you should take the default choices for migrating your application. If direct migration of a JDeveloper 10.1.2 workspace fails for any reason, try again to first migrate the 10.1.2 workspace to 10.1.3.3, then to migrate the 10.1.3.3 workspace to 11g to see if the migration is more successful.

2.2 Changes Made to All ADF Applications During the Migration

For all ADF applications, the automated migration performs the following changes:
- Creates a .adf/META-INF directory containing adf-config.xml, connections.xml, and credential-jazn-data.xml configuration files.
- Creates an offline database named DATABASE1 to hold any offline database schema from the 10.1.3 project.
- Updates import statements referencing the older JDK 1.1.8-style Collections API to use the compatible java.util Collections API.

2.3 Additional Changes Made to Web Applications

In addition, if the application you migrate is a web application, the migration performs the following additional changes:
- Creates a weblogic-application.xml deployment descriptor.

2.4 Further Changes Made to JSF Applications Using ADF Faces 10.1.3 Components

If the web application you migrate uses JavaServer Faces with the Oracle ADF Faces component library, then you will want to migrate it to use the Apache Trinidad [1] components in 11g. This ensures you will continue to have visual WYSIWYG design time support for page design, since JDeveloper 11g does not provide WYSIWYG support for working with the 10.1.3 ADF Faces component library. The Apache Trinidad components are an open source JSF component library based on Oracle's donation of ADF Faces 10.1.3 components. Understandably, the Apache Trinidad components are therefore very similar to the ADF Faces components on which they were based.
However, they are a distinct component library with a distinct identifying namespace and in some cases different names for the UI components. JDeveloper 11g automatically updates your 10.1.3 ADF Faces application pages to use the corresponding Apache Trinidad components during automatic migration. Specifically, the migration performs the following additional changes:
- Updates web.xml configuration information to appropriate versions to work with Apache Trinidad.
- Updates import statements in custom Java code referencing ADF Faces 10.1.3 APIs to the corresponding classes from the Apache Trinidad library.
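For instance, a page fragment like the following (made up for illustration; the SRDemo's own .jspx pages follow the same pattern) keeps its structure but switches tag library namespace and prefix:

<!-- Before (ADF Faces 10.1.3) -->
<jsp:root xmlns:
  <af:commandButton
</jsp:root>

<!-- After migration (Apache Trinidad) -->
<jsp:root xmlns:
  <tr:commandButton
</jsp:root>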
3 Performing the Automated Migration on the SRDemo Sample

To gain experience in migrating an ADF 10.1.3 web application using JSF and ADF Faces, we'll use the familiar SRDemo Sample application (ADF BC Version). Perform the following steps to carry out the migration:

Download and Install the Studio Edition of JDeveloper 11g

Please download the Studio Edition of JDeveloper 11g from the JDeveloper product center on OTN [3] to follow along with the remaining steps.

Ensure You Have Installed the JUnit Extension Before Migrating

The SRDemo sample (ADFBC Version) uses JUnit for unit testing. You need to ensure that you have installed the JUnit extension before performing the migration. To do this, use Help | Check for Updates... and install the appropriate JUnit extension for the JDeveloper 11g release. The extension will be named JUnit Integration 11.1.1.0.XX.YY.ZZ.

Download and Extract the Original 10.1.3 SRDemo Sample (ADFBC Version)

To ensure you can follow these instructions exactly, we recommend that you download the original 10.1.3 SRDemo Sample (ADFBC Version) [4] to use as the starting point for migration. Once downloaded, unzip the SRDemoSampleADFBC.zip file to a convenient directory using jar, unzip, WinZip or another zip file utility. It will create a top-level SRDemoSampleADFBC directory containing all of the files in the demo.

Start JDeveloper 11g and Open the SRDemoSampleADFBC Workspace

Start JDeveloper 11g and choose File | Open... . Select the SRDemoSampleADFBC.jws file in the SRDemoSampleADFBC directory you extracted above. The Migration Wizard will automatically appear.

Migrate the Application Using the Migration Wizard Dialog

To start the migration process, in the Migration Wizard dialog, accept the default choices on each page. On the Finish page, click (Finish). The actual migration may take several minutes. The Migration Progress dialog gives you feedback about what is occurring.

Finish Migration and Save All the Migrated Files

At last, when the final Migration Progress dialog appears summarizing the projects that were migrated, click (OK). Then, choose File | Save All to save all of the migrated workspace files.

4 Manual Changes Required to ADF BC Configurations Using Datasources After Migration

If any of your application module configurations use a JDBC datasource, as the SRDemo sample's SRServiceLocal configuration does, then you need to update those configurations to ensure that the jbo.server.internal_connection property is set to the name of an existing datasource. In JDeveloper 11g, when you create an application resources connection named SRDemo, by default the IDE will automatically generate a WebLogic JDBC module for that connection whenever you run or debug the application on the integrated WebLogic application server. As part of that JDBC module's settings, it will configure its JNDI name to be jdbc/SRDemoDS. In addition, JDeveloper 11g will add an entry into the application's web.xml file to reference the datasource as an application resource. The bottom line is that your application can look up the datasource using the JNDI name under the application resources JNDI namespace. Specifically, the correct JNDI name to use for the jdbc/SRDemoDS datasource will be java:comp/env/jdbc/SRDemoDS. This is the same string that the SRServiceLocal configuration is already using for its datasource, so we can leave that as it is. The one property in this configuration that needs to be changed is the jbo.server.internal_connection property, since it refers to a JNDI name java:comp/env/jdbc/SRDemoCoreDS (with the additional Core in the name) which no longer exists. So, to perform this update, do the following:

- Right-click the SRService application module in the DataModel project in the Application Navigator and choose Configurations...
- Select the SRServiceLocal configuration in the list at the left of the dialog, and click (Edit...)
- Find the jbo.server.internal_connection property and update its value to read java:comp/env/jdbc/SRDemoDS

5 Manual Changes Required to ADF Faces Code After Migration

Start by rebuilding the application to identify the few places we'll need to change ADF Faces API code. In this preview release, these require manual resolution. To rebuild the application, make sure the SRDemoSampleADFBC workspace is the current workspace, then rebuild it from the Build menu.

You should encounter four compilation errors in the UserInterface project due to small changes in the Apache Trinidad core components APIs as compared with those supported by the ADF Faces 10.1.3 core components:

In the oracle.srdemo.view.backing.SRMain class:
Error(87,23): method setTip(null) not found in class org.apache.myfaces.trinidad.component.core.input.CoreSelectBooleanCheckbox
Error(137,36): method getSelectionState() not found in class org.apache.myfaces.trinidad.component.core.data.CoreTable

In the oracle.srdemo.view.menu.MenuItem class:
Error(19,38): identifier CoreCommandMenuItem not found

In the oracle.srdemo.view.menu.TrainModelAdapter class:
Error(23,24): constructor ProcessMenuModel(Object, String, Object) not found in class org.apache.myfaces.trinidad.model.ProcessMenuModel

Perform the following changes to resolve these compilation errors:

Fix Compilation Errors in SRMain.java

On line 87 of SRMain.java, change:

getConfidential().setTip(null);

to this:

getConfidential().setShortDesc(null);

On line 137, change:

Set keySet = getHistoryTable().getSelectionState().getKeySet();

to this:

Set keySet = getHistoryTable().getSelectedRowKeys();

Fix Compilation Errors in MenuItem.java

On line 19 of MenuItem.java, change:

private String _type = CoreCommandMenuItem.TYPE_DEFAULT;

to:

private String _type = "default";

Fix Compilation Errors in TrainModelAdapter.java

On line 25 of TrainModelAdapter.java, change:

getMaxPathKey());

to:

(String)getMaxPathKey());

Rebuild the Application

Finally, rebuild the application to ensure all of the compilation errors have been resolved. To do this, make sure the SRDemoSampleADFBC workspace is the current workspace and rebuild it again. The application should now be free of any compilation errors. You can ignore the several deprecation warnings.

6 Adding Helper Code to Map Trinidad Table Keys to ADF Row Keys

The SRDemo sample's SRMain.jspx page contains a UI table of service history rows. This table is configured to allow multiple-selection. The (Delete Service History Record) button on this page is configured to invoke an ADF method action binding named deleteServiceHistoryNotes. This action binding invokes the method of the same name on the SRService application module, passing in a java.util.Set of oracle.jbo.Key objects as a parameter. The deleteServiceHistoryNotes action binding's rowKey parameter contains the EL expression ${backing_SRMain.historyTable.selectionState.keySet} which passes the set of currently selected keys in the table.

In 10.1.3, the ADF Faces CoreTable class' selectionState.keySet property returns a Set of selected row keys. In Apache Trinidad, the corresponding CoreTable class' selectedRowKeys property also returns a Set of row keys. However, under the covers the two differ in implementation in that the ADF Faces implementation returns a Set of oracle.jbo.Key objects, while the Trinidad implementation returns a Set of String keys. The SRDemo sample - arguably incorrectly - takes advantage of the implementation detail that the ADF Faces CoreTable returned a Set of oracle.jbo.Key objects to pass this directly to the application module method. So, upon migrating to Trinidad, we need to add a helper method to map the Set of CoreTable row keys into a Set of oracle.jbo.Key objects.

To do so, find the OnPageLoadBackingBeanBase class in the UserInterface project and add the indicated helper code below. This helper code will allow us to reference a Map-valued property named selectedAdfRowKeys which will return a Set<Key> when passed an instance of a CoreTable as a map key. In a later step of this document, we'll make a corresponding change to the page definition of the SRMain.jspx page to adjust to this new syntax. Add the lines of code between the two comments below into the OnPageLoadBackingBeanBase class as shown here:

public class OnPageLoadBackingBeanBase implements PagePhaseListener {
    private BindingContainer bc = null;
    // ----------- ADD THE LINES BELOW ----------
    private Map<CoreTable,Set<Key>> selectedAdfRowKeys = new HashMap(){
        public Object get(Object key) {
            if (key instanceof CoreTable) {
                return getSelectedAdfRowKeys((CoreTable)key);
            }
            throw new RuntimeException("selectedAdfRowKeys[] key not a Core Table!");
        }
    };
    public Map<CoreTable,Set<Key>> getSelectedAdfRowKeys() {
        return selectedAdfRowKeys;
    }
    public Set<Key> getSelectedAdfRowKeys(CoreTable table) {
        Set<Key> retVal = new HashSet<Key>();
        for (Object opaqueFacesRowKey : table.getSelectedRowKeys()) {
            table.setRowKey(opaqueFacesRowKey);
            JUCtrlValueBindingRef rowData = (JUCtrlValueBindingRef)table.getRowData();
            retVal.add(rowData.getRow().getKey());
        }
        return retVal;
    }
    // ----------- ADD THE LINES ABOVE ----------
    /*
     * ... etc. ... Rest of existing class here
     */
}

To compile this newly-added code, you'll also need to add the following additional import statements to the top of the file (in the same section as the existing import statements). By default the import section will appear collapsed in the code editor. Click the plus sign in the margin at the left to expand it.

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import oracle.jbo.Key;
import oracle.jbo.uicli.binding.JUCtrlValueBindingRef;
import org.apache.myfaces.trinidad.component.core.data.CoreTable;

7 Manual Changes Required to JSF Pages After Migration

In this release, a small number of manual changes are required to your JSF pages after migration due to known issues. Perform the following steps:

Add Whitespace After Colon in EL Expressions Using Ternary Operation to Avoid JSF 1.2 EL Runtime Error

The JSF 1.2 Expression Language (EL) evaluator currently throws an ELException at runtime trying to parse the : (colon) symbol in ternary expressions if it is immediately followed by an alphabetic character. The easiest way to work around this issue is to search for such occurrences in your pages and add whitespace around the : in the ternary expression. The SRDemo encounters this problem in only one of its pages. [Bug 6408848]

Open the ./app/staff/SRSearch.jspx page and select the Source tab at the bottom of the editor. Line 94 contains the consecutive characters :row as shown here:

<tr:outputText value="#{row.AssignedDate eq null?res['srsearch.highlightUnassigned']:row.AssignedDate}" ...

The colon needs to be separated from the identifier row by whitespace to avoid the error. So change the EL expression to have whitespace after the : like this:

<tr:outputText value="#{row.AssignedDate eq null?res['srsearch.highlightUnassigned']: row.AssignedDate}" ...

Add Immediate="true" To Buttons Bound to Delete Actions If Not Present

As a best practice, a JSF command component like a "Cancel" or "Rollback" button should have its immediate property set to true to avoid posting any user data in the current form before carrying out the operation. The SRDemo's SRMain.jspx page inadvertently did not correctly follow this best practice for the (Cancel) button in the "Add Note" panel in the page. Due to an ADF Faces 10.1.3 issue [Bug 5918302] now fixed in 11g, this best-practice setting of the immediate property went unnoticed. Here's why. In JDeveloper 10.1.3 with ADF Faces, when client-side mandatory attribute enforcement is not in use, a form submitted with empty values for server-side mandatory fields is not correctly validated if the user has not changed the data of any field in the form. Due to this incorrect ADF Faces behavior, the 10.1.3 SRDemo's (Cancel) button worked as expected since no validation was triggered in this case. In 11g, the validation is correctly triggered now, so we need to correct the situation by adding the immediate="true" property to this (Cancel) button. To carry out this correction, do the following:

- Open SRMain.jspx and select the Source tab to see the page source.
- Find the <tr:panelBox> tag with id="addNotesPanel".
- Inside this panel, find the <tr:commandButton> for the cancel operation. It will look like this: <tr:commandButton ... >
- Add the immediate="true" attribute so that it looks like this: <tr:commandButton immediate="true" ... >

8 Manual Changes Required to Page Definitions After Migration

Perform the following changes:

Adjust Reference to Table SelectionState

The ADF Faces table has a property named selectionState whose nested property keySet returns a Set of keys for the selected rows. The Apache Trinidad table has a property named selectedRowKeys that similarly returns a Set of keys for the selected rows. In an earlier step of this document, we added some helper code into the OnPageLoadBackingBeanBase class to allow declaratively mapping the Set of Trinidad table keys to a corresponding Set of oracle.jbo.Key objects. We need to change the reference to selectionState.keySet in the SRMain.jspx page's page definition to use this helper code. To perform this, do the following:

- Right-click ./app/SRMain.jspx in the Application Navigator and choose Go to Page Definition.
- Find the deleteServiceHistoryNotes action binding node in the bindings section of the Structure Window.
- Select the keySet parameter child node inside the deleteServiceHistoryNotes action binding.
- Change its NDValue property value expression from ${backing_SRMain.historyTable.selectionState.keySet} to ${backing_SRMain.selectedAdfRowKeys[backing_SRMain.historyTable]}

9 Replace adfFacesContext References with Trinidad requestContext

In this release, the migration wizard automates substituting references to adfFacesContext with the Trinidad equivalent requestContext in pages and page definitions; however, if you have referenced adfFacesContext in Java code, you need to make this substitution manually. So we need to use JDeveloper's search/replace in files functionality to accomplish this manually. Perform these steps:

- Select the UserInterface project in the Application Navigator.
- Select Search | Replace in Files... from the main menu. The Replace in Files dialog appears.
- Enter adfFacesContext in the Search Text field.
- Enter requestContext in the Replace With field.
- Ensure the Options are set as follows:
  [ x ] Match Case
  [ x ] Recurse into Directory
  [   ] Match Whole Word Only
  [   ] Preview Change

10 Installing the SRDEMO Schema If Necessary

The SRDemo schema is defined in a series of SQL scripts stored in the SRDemoSampleADFBC/DatabaseSchema/scripts directory. If you need to create the SRDEMO schema and the demo tables, then open a command shell window and do the following:

Change to the ./SRDemoSampleADFBC/DatabaseSchema/scripts directory.

If you need to install against a remote database:

If you are on Windows:
set LOCAL=tnsname_of_remote_db

If you are on Unix:
setenv TWO_TASK tnsname_of_remote_db

As user SYSTEM, run the build.sql script:

sqlplus system/manager @build.sql

As user SRDEMO, run the createContextPackage.sql script and the createContextPackageBody.sql script:

sqlplus srdemo/ORACLE @createContextPackage.sql
Package created.
SQL> @createContextPackageBody.sql
Package body created.
SQL> exit

11 Adjust the Application Connection Properties If Necessary

The application migration will automatically create an application resource database connection for the ADF Business Components project, based on the information found in the *.jpx and *.xcfg configuration files being migrated. For the SRDemo migration in particular, an application resources connection named SRDemo is created. If you are not running the demo against a local Oracle database whose SID is named ORCL and whose TNS listener process is listening on the default port of 1521, then you'll need to adjust the properties of this connection to point to where you have created the SRDEMO schema you want to use. To do this, expand the Application Resources zone of the Application Navigator, expand the Connection folder and its contained Database node to select Properties... on the right-mouse menu of the SRDemo connection. Review the connection settings and change them if necessary to point to the SRDEMO account you want to run against. Use the (Test Connection) button to ensure that you can successfully connect, then press (OK).

12 Migrate the Security

For any application using JAAS security like the SRDemo Sample, when you migrate to 11g you must migrate its security settings. Most of the work is automated by running a wizard. Perform these steps:

Run the ADF Security Wizard

Select the UserInterface project in the Application Navigator and then choose Application | Secure > Configure ADF Security... from the main menu. On the Select authentication type page, notice that Form-Based Authentication is automatically selected and that the infrastructure/SRLogin.jspx page is inferred for both the Login Page and the Error Page. Add a leading slash in front of both of the file names so that they read /infrastructure/SRLogin.jspx. Finally, acknowledge the Security Infrastructure Created dialog by clicking (OK).

Remove ConfigurationData from Three Projects' Source Paths

Three projects in the SRDemo shared security information from a separate directory named ConfigurationData. After performing the security migration above, this is no longer needed, so it must be removed. Perform the following steps:

Remove the ConfigurationData directory from the UserInterface Project

Double-click the UserInterface project in the Application Navigator. On the Project Source Paths page of the Project Properties dialog, in the Java Source Paths: list box, select the entry with ConfigurationData in the name and click (Remove), then click (OK).

Remove the ConfigurationData directory from the UnitTests Project

Double-click the UnitTests project in the Application Navigator. On the Project Source Paths page of the Project Properties dialog, in the Java Source Paths: list box, select the entry with ConfigurationData in the name and click (Remove), then click (OK).

Remove the ConfigurationData directory from the DataModel Project

Double-click the DataModel project in the Application Navigator. On the Project Source Paths page of the Project Properties dialog, in the Java Source Paths: list box, select the entry with ConfigurationData in the name and click (Remove), then click (OK).

Copy Previous jazn-data.xml Entries to New Location

To migrate the jazn-data.xml entries being used by the SRDemo in the ConfigurationData directory to the new jazn-data.xml file created by the Configure ADF Security wizard above, do the following:

Open the New jazn-data.xml File in JDeveloper

Expand the Application Resources accordion panel of the Application Navigator and expand the Descriptors > META-INF folders. Double-click on the jazn-data.xml file you find there. Select the Source tab to view the source of this file.

Remove the jazn-realm element and all its contents

Select the text beginning with the <jazn-realm> element and ending with (and including) the matching </jazn-realm> element, and delete the text.

Copy Previous jazn-data.xml Entries to the Clipboard

Using a text editor like Notepad, open the ./SRDemoSampleADFBC/ConfigurationData/src/META-INF/jazn-data.xml file, and copy all of the lines in the file beginning with the <jazn-realm> element and ending with (and including) the matching </jazn-realm> element to the clipboard.

Paste Clipboard into New jazn-data.xml File

Back in JDeveloper, in the new jazn-data.xml file, position your cursor between the <jazn-data> opening tag and the </jazn-data> tag, before the <policy-store> tag, and paste the contents of the clipboard into the file. Then save all your changes.

Delete the ConfigurationData Directory

After exiting the editor you opened above to copy the previous jazn-data.xml contents, you can remove the ConfigurationData directory to make it more clear that the security information is no longer coming from the files in that directory.

Ensure Each User's Credentials Are At Least 8 Characters Long

Oracle WebLogic Server requires that the credential (i.e. password) of each user in the jazn-data.xml file be at least 8 characters long. The password for each user in the SRDemo's jazn-data.xml file was welcome, which is only seven characters long, so we need to change each user's password to welcome1 to avoid a deployment exception at runtime. To accomplish this, do the following:

- With the jazn-data.xml file still open in the editor, select the Overview tab.
- Select each user in turn, starting with ahunold.
- Change each user's password to welcome1.
- You can delete the two users whose names begin with DataBase_User_*. Note that you may have to update their passwords to have at least 8 characters before you're allowed to delete them.

13 Clean the Application Compiled Artifacts

Before running the JUnit tests and the web application in the subsequent sections, first clean the application to remove any existing classes and XML files from the classpath to ensure you are only seeing the latest changes. To clean the application, select the SRDemoSampleADFBC workspace in the Application Navigator by clicking on the workspace list at the top. Then select Build | Clean All from the main menu. Acknowledge the alert if it appears.

14 Update Tests and Help Text to Reflect New 8-Character Passwords

First, since we modified the password of every user from welcome to welcome1, we need to change the password used by the unit tests as well. In addition, the password appears on the login page as help text to remind users running the demo what the password of every user is. We'll update that help text as well. To update the welcome password to be welcome1 in the unit tests, do the following:

- Open the SRServiceTestAsManagerRole class in the UnitTests project (in the oracle.srdemo.tests.model.unittests package).
- Change the welcome password it uses to welcome1.
- Repeat for the SRServiceTestAsTechnicianRole and SRServiceTestAsUserRole classes as well.

Next, update the help text for the web application to reflect the change in password. To perform this change, do the following:

- In the UserInterface project, type UIResources.properties to find the file in the list, then press Enter to open the file for editing.
- Type <b>welcome into the search field at the top of the code editor to find the translatable string containing the old welcome password.
- Update the welcome password in that help text to be welcome1 instead.
- Repeat these steps for the UIResources_it.properties file.

15 Run the JUnit Regression Tests

To ensure everything is working correctly, run the JUnit regression tests. To do this, right-click the SRServiceAllTests.java class in the UnitTests project in the Application Navigator and choose Run. The JUnit Test Runner log window should show 12 unit tests passing 100%. If all of the tests failed, either you failed to clean the compiled application artifacts as described in the previous section, or you likely need to adjust the connection properties for the SRDemo application resource connection as described in the previous section. If you are using the Oracle 11g database, remember the password is now case-sensitive, so you might need to change the SRDEMO user's password to ORACLE in uppercase. After cleaning the compiled output and/or adjusting the connection properties and testing the connection, try re-running the unit tests to see them pass.

16 Run the Application

The application is now ready to run. Right-click the UserInterface project in the Application Navigator and choose Run again to login and put the demo through its paces. When your default browser appears with the login page, remember to use the new welcome1 password for each user account you try during your testing!

Related Documents

[1] ADF to Trinidad Wiki page
[3] JDeveloper product center on OTN
[4] Original 10.1.3 SRDemo Sample (ADFBC Version)
11g-migrated SRDemo Sample (ADFBC Version)
http://www.oracle.com/technetwork/developer-tools/jdev/migration-082101.html
CC-MAIN-2015-40
en
refinedweb
This chapter contains these topics:
- What Is the Unified C API for XDK and Oracle XML DB?
- Using the XML Parser for C
- XML Parser for C Calling Sequence
- XML Parser for C Default Behavior
- DOM and SAX APIs Compared

What Is the Unified C API for XDK and Oracle XML DB?

The single DOM is part of the unified C API, which is a C API for XML, whether the XML is in the database or in documents outside the database. DOM means DOM 2.0 plus non-standard extensions in XDK for XML documents or for Oracle XML DB for XML stored as an XMLType column in a table, usually for performance improvements. The unified C API is a programming interface that includes the union of all functionality needed by XDK and Oracle XML DB, with XSLT and XML Schema as primary customers. The DOM 2.0 standard was followed as closely as possible, though some naming changes were required when mapping from the object-oriented DOM specification to the flat C namespace (overloaded getName() methods changed to getAttrName() and so on).

Unification of the functions is accomplished by conforming contexts: a top-level XML context (xmlctx) intended to share common information between cooperating XML components. Data encoding, error message language, low-level memory allocation callbacks, and so on, are defined here. This information is needed before a document can be parsed and DOM or SAX output. Both the XDK and Oracle XML DB need different startup and tear-down functions for both contexts (top-level and service). The initialization function takes implementation-specific arguments and returns a conforming context. A conforming context means that the returned context must begin with an xmlctx; it may have any additional implementation-specific parts following that standard header. Initialization (getting an xmlctx) is an implementation-specific step. Once that xmlctx has been obtained, unified DOM calls are used, all of which take an xmlctx as the first argument. This interface (new for release 10.1) supersedes the existing C API. In particular, the oraxml interfaces (top-level, DOM, SAX and XSLT) and oraxsd (Schema) interfaces are deprecated.

When the XML resides in a traditional file system, or the Web, or something similar, the XDK package is used. Again, only for startup are there any implementation-specific steps. First a top-level xmlctx is needed. This contains encoding information, low-level memory callbacks, error message language, and so on (in short, those things which should remain consistent for all XDK components). An xmlctx is allocated with XmlCreate():

xmlctx *xctx;
xmlerr  err;

xctx = (xmlctx *) XmlCreate(&err, "xdk context",
                            "data-encoding", "ascii", ...,
                            NULL);

Once the high-level XML context has been obtained, documents may be loaded and DOM events generated. To generate DOM:

xmldocnode *domctx;
xmlerr      err;

domctx = XmlLoadDom(xctx, &err, "file", "foo.xml", NULL);

To generate SAX events, a SAX callback structure is needed:

xmlsaxcb saxcb = {
    UserAttrDeclNotify,  /* user's own callback functions */
    UserCDATANotify,
    ...
};

if (XmlLoadSax(xctx, &saxcb, NULL, "file", "foo.xml", NULL) != 0)
    /* an error occurred */

The tear-down function for an XML context, xmlctx, is XmlDestroy(). Once an xmlctx is obtained, a serialized XML document is loaded with the XmlLoadDom() or XmlLoadSax() functions. Given the Document node, all API DOM functions are available. XML data occurs in many encodings.
You have control over the encoding in three ways:
- specify a default encoding to assume for files that are not self-describing
- specify the presentation encoding for DOM or SAX
- re-encode when a DOM is serialized

Input data is always in some encoding. Some encodings are entirely self-describing, such as UTF-16, which requires a specific BOM before the start of the actual data. A document's encoding may also be specified in the XMLDecl or MIME header. If the specific encoding cannot be determined, your default input encoding is applied. If no default is provided by you, UTF-8 is assumed on ASCII platforms and UTF-E on EBCDIC platforms.

A provision is made for cases when the encoding information of the input document is corrupt. For example, if an ASCII document which contains an XMLDecl saying encoding=ascii is blindly converted to EBCDIC, the new EBCDIC document contains (in EBCDIC) an XMLDecl which claims the document is ASCII, when it is not. The correct behavior for a program which is re-encoding XML data is to regenerate the XMLDecl, not to convert it. The XMLDecl is metadata, not data itself. However, this rule is often ignored, and then corrupt documents result. To work around this problem, an additional flag is provided which allows the input encoding to be forcibly set, overcoming an incorrect XMLDecl.

The precedence rules for determining input encoding are as follows:
1. Forced encoding as specified by the user.
2. Protocol specification (HTTP header, and so on).
3. XMLDecl specification.
4. User's default input encoding.
5. The default: UTF-8 (ASCII platforms) or UTF-E (EBCDIC platforms).

Once the input encoding has been determined, the document can be parsed and the data presented. You are allowed to choose the presentation encoding; the data will be in that encoding regardless of the original input encoding. When a DOM is written back out (serialized), you can choose at that time to re-encode the presentation data, and the final serialized document can be in any encoding.

The native string representation in C is NULL-terminated. Thus, the primary DOM interface takes and returns NULL-terminated strings. However, Oracle XML DB data, when stored in table form, is not NULL-terminated but length-encoded, so an additional set of length-encoded APIs is provided for the high-frequency cases to improve performance (if you deliberately choose to use them). Either set of functions works; in particular, the most frequently invoked DOM functions have dual APIs.

The API functions typically either return a numeric error code (0 for success, nonzero on failure), or pass back an error code through a variable. In all cases, error codes are stored and the last error can be retrieved with XmlDomGetLastError(). Error messages, by default, are output to stderr. However, you can register an error message callback at initialization time. When an error occurs, that callback will be invoked and no error printed.

There are no special installation or first-use requirements. The XML DOM does not require an ORACLE_HOME. It can run out of a reduced root directory such as those provided on OTN releases. However, since the XML DOM requires globalization support, the globalization support data files must be present (and found through the environment variables ORACLE_HOME or ORA_NLS10).

The C API for XML can be used for XMLType columns in the database.
XML data that is stored in a database table can be accessed in an Oracle Call Interface (OCI) program by initializing the values of OCI handles, such as the environment handle, service handle, error handle, and optional parameters. These input values are passed to the function OCIXmlDbInitXmlCtx() and an XML context is returned. After the calls to the C API are made, the context is freed by the function OCIXmlDbFreeXmlCtx().

An XML context is a required parameter in all the C DOM API functions. This opaque context encapsulates information pertaining to data encoding, error message language, and so on. The contents of this XML context are different for XDK applications and for Oracle XML DB applications. For Oracle XML DB, the two OCI functions that initialize and free an XML context take these OCI handles as their arguments.

New XMLType instances on the client can be constructed using the XmlLoadDom() calls. You first have to initialize the xmlctx, as in the example in Using DOM for XDK. The XML data itself can be constructed from a user buffer, local file, or URI. The return value from these is an (xmldocnode *) which can be used in the rest of the common C API. Finally, the (xmldocnode *) can be cast to a (void *) and directly provided as the bind value if required.

Empty XMLType instances can be constructed using the XmlCreateDocument() call. This would be equivalent to an OCIObjectNew() for other types. You can operate on the (xmldocnode *) returned by the above call and finally cast it to a (void *) if it needs to be provided as a bind value.

XML data on the server can be operated on in the same way, using the C API functions for XML operations. The general pattern for constructing a document using the DOM API and saving it to the database, or getting a document from the database and modifying it, requires the header files xml.h and ocixmldb.h; a sketch of this pattern appears below, after the parser overview.

Using the XML Parser for C

The XML Parser for C is provided with the Oracle Database and the Oracle Application Server. It is located in $ORACLE_HOME/xdk/ on UNIX systems. readme.html in the doc directory of the software archive contains release-specific information, including bug fixes and API additions.

The XML Parser for C checks if an XML document is well-formed and, optionally, validates it against a DTD. The parser constructs an object tree which can be accessed through a DOM interface, or the parser operates serially through a SAX interface. You can post questions, comments, or bug reports to the XML Discussion Forum. There are several sources of information on the relevant specifications.

The memory callback functions XML_ALLOC_F and XML_FREE_F can be used if you want to use your own memory allocation. If they are used, both of the functions should be specified. The memory allocated for parameters passed to the SAX callbacks or for nodes and data stored with the DOM parse tree is not freed until one of the following is done:
- XmlFreeDocument() is called.
- XmlDestroy() is called.

If threads are forked off somewhere in the init-parse-term sequence of calls, you get unpredictable behavior and results. Table 14-3 lists the datatypes used in the XML Parser for C.

XML Parser for C Calling Sequence

Figure 14-1 describes the XML Parser for C calling sequence as follows:

1. The XmlCreate() function initializes the parsing process. The parsed item can be an XML document (file) or string buffer.
2. The input is parsed using the XmlLoadDom() function.
3. DOM or SAX API:

DOM: If you are using the DOM interface, include the following steps: The XmlLoadDom() function calls XmlDomGetDocElem(). This first step calls other DOM functions as required. These other DOM functions are typically node or print functions that output the DOM document. You can first invoke XmlFreeDocument() to clean up any data structures created during the parse process.

SAX: If you are using the SAX interface, include the following steps: Process the results of the parser from XmlLoadSax() using callback functions. Register the callback functions. Note that any of the SAX callback functions can be set to NULL if not needed.

4. Use XmlFreeDocument() to clean up the memory and structures used during a parse, and go to Step 5, or return to Step 2.
5. Terminate the parsing process with XmlDestroy().

The sequence of calls to the parser can be any of the following:
- XmlCreate() - XmlLoadDom() - XmlDestroy()
- XmlCreate() - XmlLoadDom() - XmlFreeDocument() - XmlLoadDom() - XmlFreeDocument() - ... - XmlDestroy()
- XmlCreate() - XmlLoadDom() - ... - XmlDestroy()

Figure 14-1 XML Parser for C Calling Sequence

XML Parser for C Default Behavior

The following is the XML Parser for C default behavior:
- Character set encoding is UTF-8. If all your documents are ASCII, you are encouraged to set the encoding to US-ASCII for better performance.
- Messages are printed to stderr unless an error handler is provided.
- The default behavior for the parser is to check that the input is well-formed but not to check whether it is valid. The property "validate" can be set to validate the input.
- The default behavior for whitespace processing is to be fully conforming to the XML 1.0 specification; that is, all whitespace is reported back to the application, but it is indicated which whitespace is ignorable. However, some applications may prefer to set the property "discard-whitespace", which discards all whitespace between an end-element tag and the following start-element tag.

The Oracle XML Parser for C checks if an XML document is well-formed, and optionally validates it against a DTD. The parser constructs an object tree which can be accessed through one of the following interfaces: the DOM interface, or the SAX interface with callbacks registered through the XmlLoadSax() call. A pointer to a user-defined context structure can also be included. That context pointer will be passed to each SAX function.

The XML Parser and XSLT Processor can be called as an executable by invoking bin/xml:

xml [options] [document URI]
or
xml -f [options] [document filespec]

Table 14-4 lists the command-line options. See the demo/c/ subdirectory for full details of how to build your program. The $ORACLE_HOME/xdk/demo/c/ directory contains several XML applications to illustrate how to use the XML Parser for C with the DOM and SAX interfaces. To build the sample programs, change directories to the sample directory ($ORACLE_HOME/xdk/demo/c/ on UNIX) and read the README file. This file explains how to build the sample programs. Table 14-5 lists the sample files. Table 14-6 lists the programs built by the sample files.
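Putting the calling sequence together, a minimal DOM-based program might look like the following sketch (built only from the fragments shown above; the exact XmlCreate() property arguments can vary by release):

#include <stdio.h>
#include <xml.h>

int main(void)
{
    xmlctx     *xctx;
    xmldocnode *doc;
    xmlerr      err;

    /* Step 1: initialize the parsing process. */
    xctx = (xmlctx *) XmlCreate(&err, "demo context", NULL);
    if (!xctx) {
        printf("XmlCreate failed, error %u\n", (unsigned) err);
        return 1;
    }

    /* Step 2: parse the input document (optionally validating it). */
    doc = XmlLoadDom(xctx, &err, "file", "foo.xml", "validate", 1, NULL);
    if (!doc) {
        printf("parse failed, error %u\n", (unsigned) err);
    } else {
        /* Step 3: work with the DOM, starting from the document element. */
        if (XmlDomGetDocElem(xctx, doc) != NULL)
            printf("document element found\n");
        /* Step 4: free the structures created during the parse. */
        XmlFreeDocument(xctx, doc);
    }

    /* Step 5: terminate the parsing process. */
    XmlDestroy(xctx);
    return 0;
}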
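Returning to the Oracle XML DB usage described earlier, here is a minimal sketch of that pattern as well (assumptions: envhp, svchp and errhp are already-initialized OCI handles, no optional context parameters are passed, and all error handling is omitted):

#include <xml.h>
#include <ocixmldb.h>

static void process_xmltype(OCIEnv *envhp, OCISvcCtx *svchp, OCIError *errhp)
{
    xmlctx     *xctx;
    xmldocnode *doc;
    xmlerr      err;

    /* Obtain the XML context for this database session. */
    xctx = OCIXmlDbInitXmlCtx(envhp, svchp, errhp, NULL, 0);

    /* Build a DOM, exactly as on the client side. */
    doc = XmlLoadDom(xctx, &err, "file", "purchase_order.xml", NULL);

    /* ... manipulate the tree with the common DOM API here; the
       (xmldocnode *) can be cast to (void *) and used as the bind
       value for an XMLType column ... */

    XmlFreeDocument(xctx, doc);

    /* Release the context once the C API calls are done. */
    OCIXmlDbFreeXmlCtx(xctx);
}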
http://docs.oracle.com/cd/B13789_01/appdev.101/b10794/adx12api.htm
CC-MAIN-2015-40
en
refinedweb
#include <stdio.h>

FILE *fdopen(int fildes, const char *mode);
http://www.makelinux.net/man/3posix/F/fdopen
CC-MAIN-2015-40
en
refinedweb
Now that you know how to request message details via NNTP, you can encapsulate this into a separate class that is responsible for getting these details. This class will use its count field to keep track of the most recent message on the newsgroup, so each time it receives a response to the XOVER command, it can tell if new messages have arrived. The getNewSubjects method will then be responsible for returning an array of these new messages. Create the file NntpConnection.java: import java.util.*; import java.net.*; import java.io.*; public class NntpConnection { private BufferedReader reader; private BufferedWriter writer; private Socket socket; private int count = -1; public NntpConnection(String server) throws IOException { socket = new Socket(server, 119); reader = new BufferedReader( new InputStreamReader(socket.getInputStream( ))); writer = new BufferedWriter( new OutputStreamWriter(socket.getOutputStream( ))); reader.readLine( ); writeLine("MODE READER"); reader.readLine( ); } public void writeLine(String line) throws IOException { writer.write(line + "\r\n"); writer.flush( ); } public String[] getNewSubjects(String group) throws IOException { String[] results = new String[0]; writeLine("GROUP " + group); String[] replyParts = reader.readLine( ).split("\\s+"); if (replyParts[0].equals("211")) { int newCount = Integer.parseInt(replyParts[3]); int oldCount = count; if (oldCount == -1) { oldCount = newCount; count = oldCount; } else if (oldCount < newCount) { writeLine("XOVER " + (oldCount + 1) + "-" + newCount); if (reader.readLine( ).startsWith("224")) { LinkedList lines = new LinkedList( ); String line = null; while (!(line = reader.readLine( )).equals(".")) { lines.add(line); } results = (String[]) lines.toArray(results); count = newCount; } } } return results; } } The IRC bot will be written so that it spawns a new Thread to continually poll the newsgroup server. Performing this checking in a new Thread means that the bot is able to carry on doing essential tasks like responding to server pings. This new Thread is able to send messages to the IRC channel, as the sendMessage method in the PircBot class is thread-safe. The bot will also store the time it last found new articles and made an announcement. If it has been less than 10 minutes since the last announcement, the bot will not bother saying anything. This is useful when lots of messages are arriving on a moderated newsgroup, as these tend to arrive in large clusters in a short time. Create the bot in a file called NntpBot.java: import org.jibble.pircbot.*; public class NntpBot extends PircBot { private NntpConnection conn; private long updateInterval = 10000; // 10 seconds. private long lastTime = 0; public NntpBot(String ircServer, final String ircChannel, final String newsServer, final String newsGroup) throws Exception { setName("NntpBot"); setVerbose(true); setMessageDelay(5000); conn = new NntpConnection(newsServer); connect(ircServer); joinChannel(ircChannel); new Thread( ) { public void run( ) { boolean running = true; while (running) { try { String[] lines = conn.getNewSubjects(newsGroup); if (lines.length > 0) { long now = System.currentTimeMillis( ); if (now - lastTime > 600000) { // 10 minutes.
sendMessage(ircChannel, "New articles posted to " + newsGroup); } lastTime = now; } for (int i = 0; i < lines.length; i++) { String line = lines[i]; String[] lineParts = line.split("\\t"); String count = lineParts[0]; String subject = lineParts[1]; String from = lineParts[2]; String date = lineParts[3]; String id = lineParts[4]; // Ignore the other fields. sendMessage(ircChannel, Colors.BOLD + "[" + newsGroup + "] " + subject + Colors.NORMAL + " " + from + " " + id); } try { Thread.sleep(updateInterval); } catch (InterruptedException ie) { // Do nothing. } } catch (Exception e) { System.out.println("Disconnected from news server."); } } } }.start( ); } } Note that the Thread is started from the NntpBot constructor and no PircBot methods are overridden—there is no need for this bot to respond to user input, unless you want to modify it to do so. The main method now just has to construct an instance of the bot, as the constructor also tells the bot to connect to the IRC server. Create the main method in NntpBotMain.java: public class NntpBotMain { public static void main(String[] args) throws Exception { NntpBot bot = new NntpBot("irc.freenode.net", "#irchacks", "news.kent.ac.uk", "ukc.misc"); } } Note that the constructor arguments specify which IRC server to connect to, which channel to join, which newsgroup server to connect to, and which newsgroup to monitor. If you want, you could make the bot more flexible by using the command-line arguments (args) to specify the name of the server, channel, and so forth. Author's note: Even if you're a busy person, there's still no need to have hundreds of console windows open at the same time. Console-based IRC clients are perfectly amenable to being used with the GNU screen tool, and can even help you hide your IRC windows from your boss. If you're regularly on the move, you need a way to keep track of IRC while away from your computer. Run a console-based IRC client in screen as a simple yet powerful solution. If you're running a text mode (console) IRC client on a remote system, it can be annoying having to reconnect if your connection drops or if you have to move to another machine. When you reconnect, you will no longer see the messages that were sent when you were last connected. GNU screen provides a neat solution to this problem. It allows you to disconnect from a terminal session without quitting your running programs. You can then log in and resume the screen session at a later time, giving the appearance that you never disconnected. screen is provided as a package on most Unix-based systems. If it isn't already installed, install the screen package or download and install it from source. Starting screen is amazingly simple, yet many people overlook the usefulness of it. At a shell prompt, simply type: % screen If you get a startup message, just press Enter. You should then see a shell prompt. This is just like any other shell, but with one difference—every time you press Ctrl-A, it will be intercepted by screen. All of screen's commands are accessed by typing a different letter after this key. screen provides a short summary of the commands if you press Ctrl-A followed by the ? key. These combinations are often abbreviated to the form ^A ?. Probably the most useful command is the one that lets you "detach" from a screen session. Typing ^A d will detach your session, leaving the programs running inside the screen just as they were. To reattach to the session, just type: % screen -r You should now see the screen as you left it.
You can also log out completely, and later log back in and reattach. By default, screen will also detach sessions when the terminal is closed, so screen sessions survive network connections dying and closing the terminal window. If for some reason your connection dies and the screen isn't detached, screen -r is not enough to reattach. You will need to run the command screen -r -d to detach and then reattach. Also, if you are running more than one screen, you need to give the pid (process ID) or name of the screen process that you want to reattach to after the -r parameter. screen's other great strength is that it lets you run more than one program inside one terminal window. This makes it easy to leave several programs running and access all of them from another location, even if you are restricted to a very slow connection. This is achieved by supporting virtual windows inside the screen session. You can create a new window by pressing ^A c. Once you have more than one window, you can use ^A n or ^A <space> to go to the next window, and ^A p to go to the previous window. This feature is made even more flexible because screen allows windows to be split. This means you can see more than one window on the screen at a time. To split the screen, type ^A S. This one is case sensitive, so you will need to hold down the Shift key as well. This splits the window into two, and you should see a new blank window in the bottom half of the screen. Pressing ^A <tab> will change to this new window, and you can either change to another existing window by pressing ^A n, or you can create a new window. If you want to get rid of the split windows, ^A Q will hide all the inactive windows. The screenshot in Figure 14-8 shows screen with a split window, displaying irssi in a channel where system logs are sent to IRC, and the screen manual page in the bottom half. Figure 14-8. Screen with a split window If you have played with the split-window feature, you may have noticed you can have a window visible in several split windows at the same time. This is actually a very useful feature because screen allows you to attach to a screen session more than once. This is called multiple display mode, and you can use it to display the same window on multiple terminals, or you can display a different window on different terminals. To use it, simply add the -x option to the reattach command, so it becomes: % screen -r -x screen also has support for copy and paste from one window to another. Type ^A [ to start the copy, move the cursor with the arrow keys, and press Enter to start copying; then move the cursor to the end of the text you want to copy and press Enter again. The text that you have copied will be stored in memory until you use ^A ] to paste it. When you are selecting the text, there are some other keys that you can use. For example, pressing / will allow you to search within the text buffer, and Page Up and Page Down will scroll a full screen. More relevant to IRC is a script that checks that your IRC client is running so you don't even have to manually restart if it crashes or if the system you're running it on is rebooted. This makes use of the cron facility found on most Unix systems, along with a little bit of Bourne shell scripting. To edit your user's crontab, run this command: % crontab -e You can then create a new line in your crontab: */5 * * * * IRC=`screen -ls | grep -v Dead | grep "\\.irc"`; if [ "x$IRC" = "x" ]; then screen -dmS irc irssi;fi This causes the script to be run every five minutes.
When it runs, it checks the output of screen -ls for a session called irc. If it doesn't find it, it starts screen in detached mode (with the options -dm) and names the session irc (option -S). screen will run the command irssi once it has started. If you want to use a different IRC client, you could replace the irssi with whatever you use to start your IRC client. You can also protect your sessions from prying eyes by setting a password with the password command on the screen command line. You can do this by typing ^A :password and following the prompts. If you want to make this permanent after setting the password, edit the file ~/.screenrc (create it if it doesn't exist) and type password followed by ^A ] (this pastes the contents of the paste buffer). Your line should look something like password NSQuRKGNxIEbw. Whenever someone runs screen -r from now on, they will be prompted for the password. The security provided by this is in addition to that provided by your login password, but it won't deter someone who is determined to get past if they have access to your system account. There isn't enough room to cover all of screen's features here; however, screen has a very good manual page so man screen will tell you lots more, such as how to remap keys to suit your tastes and how to allow multiple users to share a screen session. With screen, it's easy to run multiple IRC clients and access them from anywhere in the world.
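If you keep tweaking the same settings, a few lines in ~/.screenrc save time; the following is just an illustrative starting point, not a canonical configuration:

# ~/.screenrc
startup_message off      # skip the copyright screen
defscrollback 5000       # keep more scrollback per window
hardstatus alwayslastline "%H | %-w%n %t%+w"   # simple window list in the status line
screen -t irc irssi      # open a window named "irc" running irssi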
http://archive.oreilly.com/lpt/a/5108
CC-MAIN-2015-40
en
refinedweb
Doing some more experiments, I was jumping around in the ffmpeg codebase. In the file aacpsy.c there are three includes: #include "avcodec.h" #include "aactab.h" #include "psymodel.h" All three headers are in the same directory. Now "aactab.h" is not found, I can't jump to it, and even bovinate doesn't say anything about it. What is different about it compared to the other two? I don't get it. Thanks, Andrea
http://sourceforge.net/p/cedet/mailman/attachment/AANLkTimq-1868KTZ2W7n8N18QTYjSwWKdHutXiqrGUQu@mail.gmail.com/1/
CC-MAIN-2015-40
en
refinedweb
Agenda See also: IRC log Previous minutes accepted no other business none Bob: Addressing - significant number of test cases complete. 50% complete. testing will proceed today/tomorrow. using regular telcon slot for this. nothing to discuss until we make further progress on the tests Tony: had attendance from three orgs. made progress towards completing the testing process. we are getting there. depends on holding a second f2f for this Philippe: two issues identified. F&P where we have implementations but only for part. problem is that we have no tests for F&P nor use of them either. second issue is MEPs: of the eight, only two are implemented Tony: other 6 were marked as "at risk" Yves: For XMLP, we still do not have a chair (see agenda item). we will be having a call to restart work in the interim Martin: For choreography, few issues with implementation. aside from that, pretty quiet. expect formal model and type system will be published as WG Note Carine: published first WD beginning of the month. dealing with new issues. f2f at the end of August. also working on example document Philippe: usage scenario document? Carine: not sure if we want best practices or not. could be quite different. could have one or two different types PaulC: For Policy, first f2f meeting in Austin last week. overviews of the documents done and some drill-down into issues, etc. named editing team that is producing first WD for the group that we hope to publish as first WD. confirmed next f2f in Sept and tentative in Nov (solicited hosts). we are working through August PaulD: For Databinding, one issue is practical testing work to demonstrate what we have been doing. tried to drum up support within WS-I community. not sure how effective that response will be. nothing to report Philippe: reviews up-coming f2f meetings. Additions from Jacek and PaulC. Philippe: Jonathan is not present, so we will skip this too Yves: we still do not have a chair. difficult for me to work on that, vacation. looking at potential candidates for chairing. will have to discuss constraints in the charter within the group Philippe: will have a call tomorrow and discussion will center around one-way MEP... not sure it is really needed for WS-Addressing Yves: if needed by WS-Addressing, will be quite late Philippe: hopefully, we will have a decision for next CG Philippe: W3C received the WS-Eventing submission a couple months ago and we've got 3-4 W3C Members who have been asking for a WG since then. I presented the case at the AC Meeting and it was not clear at the time that a WG would have to be created. I still do not believe at this time that we are ready to create a WG. WS-Notification is progressing at OASIS and we would have to define the relationship with them. I don't see a desire for consensus in this area. I seek opinions within the CG as to whether we should do something or not PaulC: why do they want a WG? Philippe: I suppose to get a W3C REC PaulC: what do they want changed? Can't they use the submission? Philippe: not clear that they want to change anything in the spec. They certainly want to use a REC rather than a submission PaulC: W3C should use its scarce resources in the manner in which its members want. not sure what value there is in this. Philippe: The same value as for WS-Policy PaulC: WS-Eventing is being used at the hardware level.
I don't know who is putting WS-Policy in hardware <pauld> WS-Addressing is similar in complexity and use to WS-Eventing Chris: somebody is putting ws-policy into hardware, and any number of the WS-* could end up in hardware. PaulC: are we talking about changing the namespace? Philippe: people who talked to me did not go into any specific issues beyond the fact that we don't have a wg. Steve: don't know why hardware is relevant at all. if enough members are interested in going this route, then it is up to them to deliver a proposed charter and go through the process Philippe: referring to the second point. charters are only proposed by the director. members can propose charters to the team, but I do not see that happening here. Steve: if there is sufficient demand, then these things will happen PaulC: think that you should continue to collect data PaulD: always wonder about the question of whether to use WS-N or WS-E. Would it help the situation if W3C was starting a Group on WS-Eventing? Philippe: I don't think that we can simply ignore WS-N. the confusion is still out there, both are being used right now Martin: given the reconciliation as mentioned in the paper that was published... I would think that WS-N is doomed Philippe: thanks for the f/b... we will keep investigating Philippe: been invited to TMF tomorrow in the workshop to present the state of WS at W3C. They need to decide what to do in the area of management. which one should they pick, if any at all. just a kick-start meeting. don't expect a decision will be made. presenting WSDL, WS-A, WS-CDL and WS-P which are relevant in this space PaulD: main problem they have in this space, not sure it can be addressed by W3C... lack of bindings to MQ, and other transports. have described things in WSDL, etc. wondering what w3c is doing in this area to help them. Maybe create a registry with all WS-* specs? Philippe: first goal is to convey information. don't want w3c to make any statements regarding wsdm or ws-man. not sure it will help them to have a registry in order to make the choice between WS-DM and WS-Mngt. Philippe: policy is not taking a break in August. will anyone be giving regrets for any of the three calls? Steve: 15th will be on the beach Philippe: Will maintain the CG calls to keep the pace going... next call August 1 PaulC: could you get the agenda out earlier? Philippe: will endeavor to do so. PaulC: reason I am asking is that when we go a long time without a call, it's harder to remember Philippe: apologies for being delinquent Steve: What is the state of ISO TC 68 WG4? Philippe: on my TODO list. Should get to it asap. ISO WG working on SWIFT... Steve was going to join that wg to help them understand CDL. ACTION: Philippe to follow-up on ISO TC 68 WG4 for Steve
http://www.w3.org/2006/07/18-ws-cg-minutes.html
CC-MAIN-2015-40
en
refinedweb
SOLVED [Request] Paste special (i.e. from other vector apps) Can there be a special Paste function that pastes in the middle of the current glyph view regardless of the coordinates of the vector art from which it came in Illustrator? Maybe Shift+Cmd+V? I see benefits for the current paste behavior, like if you pay close attention to position and want RF to honor it. But I think that often if vector art is pasted from elsewhere, it's lost off the main work area, and then it becomes a game of hide-and-seek. Would be great to have the option to paste it within view. TIA, Ryan here's a quick way to move the pasted glyph contours to the origin position: g = CurrentGlyph() L, B, R, T = g.bounds g.moveBy((-L, -B)) hope this helps! Great, thanks @gferreira! Is there a way before that to get the bezier data from the pasteboard and append that to CurrentGlyph? Is it NSPasteboard? the pasted contours are selected, so you can get the selection bounds and move only the selected contours to the origin: from mojo.events import addObserver class MyCustomPaste: def __init__(self): addObserver(self, "pasteCallback", "paste") def pasteCallback(self, notification): g = notification['glyph'] L = min([c.bounds[0] for c in g.selectedContours]) B = min([c.bounds[1] for c in g.selectedContours]) for c in g.selectedContours: c.moveBy((-L, -B)) MyCustomPaste() cheers! RoboFont uses the old aicbTools from @tal. a small example of getting copied vector data and drawing it in the current glyph, scaled to the x-height: from lib.contrib.aicbTools import readAICBFromPasteboard, drawAICBOutlines data = readAICBFromPasteboard() source = RGlyph() drawAICBOutlines(data, source.getPen()) dest = CurrentGlyph() scaleToHeight = dest.font.info.xHeight bounds = source.bounds if bounds: minx, miny, maxx, maxy = bounds h = maxy - miny source.moveBy((-minx, -miny)) scale = scaleToHeight / h source.scaleBy(scale) dest.clear() dest.appendGlyph(source) Great, thanks you two! I've combined these things into one script that does pretty much what I was looking to do. Here it is: # menuTitle : Paste to Baseline (Pasteline?) # shortCut : shift+command+v from lib.contrib.aicbTools import readAICBFromPasteboard, drawAICBOutlines g = CurrentGlyph() data = readAICBFromPasteboard() source = RGlyph() drawAICBOutlines(data, source.getPen()) bounds = source.bounds if bounds: L = bounds[0] B = bounds[1] source.moveBy((-L, -B)) g.appendGlyph(source) Pastaline?
https://forum.robofont.com/topic/800/request-paste-special-i-e-from-other-vector-apps
CC-MAIN-2022-40
en
refinedweb
A simple utility to interface with prompt-like (serial) consoles Project description serial_console Serial console is a small utility / library that makes it easier to interface with consoles that use a prompt-like interface. Instead of simply waiting until the connection times out, serial_console tries to continuously match for a prompt. # With a simple sh shell present on /dev/ttyS0, 9600 baudrate: $ serial_console -p '$ ' 'echo hello world!' hello world! Prompt matching serial_console reads byte for byte from the output buffer and tries to match for every new character it receives. It separates lines based on the OS's default line separator, and only matches on the current line, since prompts normally don't span multiple lines. Matching can be done on equality, on substrings, or with regular expressions. Prompts matched for equality must match the whole line. Substring matching simply looks for the existence of the prompt in the line. Regular expressions are also available, but may need some extra care to be made efficient. Regular Expressions Take the following regular expression, matching most Cisco iOS console prompts: ^([a-z][a-z0-9.-]{0,61}[a-z0-9]|[a-z])(\([a-z0-9 ._-][a-z0-9 ._-]*\))?([>#])$ Executed on every single new character, this is a very costly expression. To try and speed this up, serial_console will look for an "end anchor group" in the regexp. Most prompts will end in non-alphanumeric characters that may not be commonly found in regular output, like $, #, or ?. If the regexp contains a group at its end, this group will be matched first before trying to match the whole expression. This will drop most non-matching lines quickly without having to try all iterations from the start of the regexp. So with the above example expression, serial_console would detect ([>#])$ as the "end anchor" and try to match for [>#]$ first. The regexp may also contain any amount of spacing characters between the group and the end of the expression, and the group may also be a non-matching group or named group: (?P<endanchor>[#$>?)]) ?\s*$ # results in the following end anchor regexp: [#$>?)] ?\s*$ Serial communication Serial communication is done with the help of the pyserial package and is thus mostly OS independent. It is also possible to specify the console encoding, character mappings (f.i. to map '\r\n' to '\n'), and the line separator (although not with the serial_console CLI tool). Note on character mapping The -m CLI argument makes it possible to easily specify one or multiple mappings. All mappings are applied in the order they have been supplied, but they are applied on the whole last read line on every new character. So with the following example: serial_console -m '\r\n' '\n' -m '\r' '\n' ... Even if the '\r\n' '\n' rule is defined first, the '\r' '\n' rule would match first for a line like foo output\r\n, and result in a mapped string foo output\n\n. Compatibility with other consoles While serial_console was mainly intended for use with "dumb" serial connections, it may just as well be used with any console or interface that has the typical Pythonic IO API. An "interface" base class is available in serial_console.io.ConsoleIO. To use a custom ConsoleIO class, simply specify it as the console_io_class named parameter when instantiating a Console object. Calling Console::open will pass all *args and **kwargs straight through to the constructor of the console_io_class.
import serial_console

class MyCustomConsoleIO(serial_console.io.ConsoleIO):
    def __init__(self, foo_argument, bar_parameter=None):
        ...

console = serial_console.console.Console('(?:[#$]) $', console_io_class=MyCustomConsoleIO)
console.open('foo_argument', bar_parameter=123)
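To see why the end-anchor optimization pays off, here is a small self-contained sketch using plain re (independent of serial_console internals) that only attempts the full prompt regexp once a cheap end-anchor match succeeds:

import re

FULL_PROMPT = re.compile(r'^([a-z][a-z0-9.-]{0,61}[a-z0-9]|[a-z])(\([a-z0-9 ._-]+\))?([>#])$')
END_ANCHOR = re.compile(r'[>#]$')  # cheap pre-filter derived from the trailing group

def is_prompt(line):
    # Reject most ordinary output lines with the cheap check first.
    if not END_ANCHOR.search(line):
        return False
    return FULL_PROMPT.match(line) is not None

print(is_prompt('router1(config)#'))  # True
print(is_prompt('interface up'))      # False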
https://pypi.org/project/hlfbt-serial-console/0.0.5/
CC-MAIN-2022-40
en
refinedweb
Introduction We will continue with the same repository as in the previous article on GraphQL scalars. You can clone the GitHub repository using this command: git clone git@github.com:atherosai/graphql-gateway-apollo-express.git install dependencies with npm i and start the server in development with npm run dev You should now be able to access the GraphQL Playground. In this part, we will go through the fields that use Enum types in our previously defined schema. If you are new to GraphQL, it might also be helpful to check out our previous articles on built-in scalars as well as the one on input object type. We use them as prerequisites for this article in terms of understanding GraphQL, and we will also use parts of the code that we built in previous articles. In the following image we illustrate the hierarchical tree graph of the response for our queries and mutations. Enum Types Enums are basically a special type we can use to enumerate all possible values of a field. By using Enums we are adding another kind of validation to an existing GraphQL schema. The specified values of the Enum type are the only possible options that are accepted. Now let's go right into the implementation. Let's consider that in our Task type we also define an Enum for the task state. We design this enum type with the consideration that a given task can have one of three states: - ASSIGNED; - UNASSIGNED; - IN_PROGRESS; In the GraphQL query language we can write it in the following form: enum TaskStateEnum { ASSIGNED UNASSIGNED IN_PROGRESS } It is common good practice to capitalize enum values. In GraphQL language, these values are automatically mapped to the name of the value. However, when we rewrite the TaskStateEnum with graphql-js, we can also write the enum type in the following form: import {GraphQLEnumType,} from 'graphql';const TaskStateEnumType = new GraphQLEnumType({name: 'TaskStateEnum',values: {ASSIGNED: {value: 0,},UNASSIGNED: {value: 1,},IN_PROGRESS: {value: 2,},},});export default TaskStateEnumType The enum class allows us to map enum values to internal values represented by integers (or different strings, etc.). Mapping enumerated values to integers lets you design your schema in the most efficient way in terms of performance. In our case, we did not use any real database and did not send it to some more complex backend infrastructure. However, in production you will often use some monolithic or microservice architecture. It is much more efficient to send these enum values using integers, as the size is much smaller and can be transferred over the network more quickly. Also, it is more efficient to store integer values in the database. Including our Enum in the database Now we can add the defined state to our Task definition. We will also use our TaskStateEnumType for validating input when creating a task, in the input type called CreateTaskInput (a sketch of its definition is shown further below). We defined a defaultValue for the state field with the built-in function getValue, which is available for all Enum objects. It is a good practice to define all enumerated fields in one place and then just reuse them through the whole codebase. All possible values for the state field are now available through introspection and can be viewed in GraphQL Playground in the Docs tab. The schema can also be written in SDL. Now let's move on to result and input coercion for enums.
If you do not know what that means, it might be useful to go through the article on scalars and its input and result coercion, where we explain these terms. Result coercion for enums If we receive a value from the database that is not defined in the enum type, an error is raised. For example, let's change the value of the model Task state in the "in-memory" db (task-db.ts) to badstate. let tasks = [{id: '7e68efd1',name: 'Test task',completed: 0.0,createdAt: '2017-10-06T14:54:54+00:00',updatedAt: '2017-10-06T14:54:54+00:00',taskPriority: 1,progress: 55.5,state: "badstate",},];export default tasks; And then execute the following query for retrieving tasks from the server: query getTasks {tasks {idnamecompletedstateprogresstaskPriority}} The GraphQL server will raise the following error: Expected a value of type "TaskStateEnumType" but received: badstate It is also important to recognize that if we change the state from the integer value 1 (UNASSIGNED) to the string value UNASSIGNED, GraphQL raises the same type of error: Expected a value of type "TaskStateEnumType" but received: UNASSIGNED The GraphQL server takes care of internal value mapping when dealing with result coercion, and therefore raises an error when no internal enum value matches the received data. If the data match the enumerated values, the data values are then mapped according to the enum specification; e.g., 2 will be mapped to IN_PROGRESS. However, when we use the float 1.0 as a value for the state field of the task in the "in-memory" DB, the value is transformed to the integer 1, and GraphQL does not raise an error, as the value is available in the TaskStateEnum specification. Input coercion for enums When we deal with input coercion for enums, we have to take into account the additional validation of enumerated values. The GraphQL server will check if the values for the enum field match the defined values in the schema. Therefore, if we execute the following mutation for adding a task with the state argument badstate: mutation createTask {createTask(input: {name: "task with bad state", state: badstate}) {id}} the GraphQL server will raise the following error: Argument "input" has invalid value {name: "task with bad state", state: badState}. In field "state": Expected type "TaskStateEnumType", found badState. A common error when using Enums is the execution of this createTask mutation: mutation createTask {createTask(input: {name: "Next task",state: "ASSIGNED"}) {task {idnamecompletedstateprogresstaskPriority}}} You might not expect to get the following error: Argument "input" has invalid value {name: "Next task", state: "ASSIGNED"}. In field "state": Expected type "TaskStateEnumType", found "ASSIGNED". The correct way to pass enum inline arguments in GraphQL Playground is just to specify the enumerated values without quotation marks: mutation createTask {createTask(input: {name: "Next task",state: ASSIGNED}) {task {idnamecompletedstateprogresstaskPriority}}} This is different, of course, when we specify mutations using variables, as the variable has to be written according to the JSON syntax specification. When using variables, the GraphQL document will look as follows: mutation createTask($input: CreateTaskInput!) {createTask(input: $input) {task {idname}}} with these variables: {"input": {"name": "Next task","state": "ASSIGNED"}}
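For reference, here is a minimal sketch of how the CreateTaskInput type can be written with graphql-js; the exact field list and the './task-state-enum' import path are assumptions for illustration, but it shows how the enum and its getValue default fit together:

import { GraphQLInputObjectType, GraphQLNonNull, GraphQLString } from 'graphql';
import TaskStateEnumType from './task-state-enum';

const CreateTaskInput = new GraphQLInputObjectType({
  name: 'CreateTaskInput',
  fields: {
    name: { type: new GraphQLNonNull(GraphQLString) },
    state: {
      type: TaskStateEnumType,
      // default to the internal value behind UNASSIGNED (1)
      defaultValue: TaskStateEnumType.getValue('UNASSIGNED').value,
    },
  },
});

export default CreateTaskInput;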
We can list the values with introspection query: query introspectTaskStateEnumType {__type(name: "TaskStateEnum") {enumValues {name}}} and reuse the Enumerated types across our whole code base. Did you like this post? The repository with the examples and project set-up can be cloned from this branch. Feel free to send any questions about the topic to david@atheros.ai.
https://atheros.ai/blog/how-to-use-graphql-enum-type-and-its-best-practices
CC-MAIN-2022-40
en
refinedweb
Powerful CMS for Hugo. Zero headache. Drop our API-based CMS into your Hugo app in minutes. ButterCMS provides a component-based CMS and content API for Hugo apps. Use ButterCMS to enable dynamic content in your apps for page content, blogs, and anything else. Most customers get our Hugo CMS for a simple reason: Hugo developers can build solutions that marketing people love. Our API allows your content gurus to quickly spin up high-converting, dynamic landing pages, SEO pages, product marketing pages, and more, all using simple drag-and-drop functionality. The simplest CMS for your Hugo app Our mission was to make it easy to integrate Butter with your existing Hugo app in minutes. It's so simple! To demonstrate, here's a mini tutorial to give you a feel for the process of adding marketing pages to your Hugo app. Of course, you can also use our Collections to do advanced content modeling. For a full integration guide, check out our Official Guide for the ButterCMS Hugo API client. See how easily you can integrate the ButterCMS Pages API with your Hugo app. Seamless Hugo components Empower your marketing team with dynamic landing pages that align perfectly with your Hugo components. Components are the essential building blocks of any Hugo app, and ButterCMS handles them with ease. Our drag and drop interface makes it simple to structure your content to match existing Hugo components, and to create new reusable components whenever you need them. One Hugo CMS with everything you need There's a reason so many developers are choosing a headless Hugo CMS. It's easy to set up, offers flexible, customizable content modeling, and gives you access to our full Hugo API. ButterCMS saves you development time Most customers get our Hugo CMS up and running in less than an hour. Try it yourself! Simple as can be, with powerful features and great customer support. DILLON BURNS, FRONT END DEVELOPER, KEYME How to integrate ButterCMS into your Hugo application Just follow the simple steps below to complete the integration and begin creating pages with Butter. Be sure to check out our full guide to creating pages using the ButterCMS Hugo API. 1. First, install our ButterCMS Go SDK. go get github.com/buttercms/buttercms-go 2. Now, let's fetch some content from Butter! As soon as you create your Butter account, a sample page, "simple page", is set up for you. We can quickly use the Butter Go SDK to fetch your page and return it as a json response. import ( "github.com/buttercms/buttercms-go" ) ButterCMS.SetAuthToken("your_api_token") params := map[string]string{ "preview":"1" } ButterCMS.GetPage("*", "simple-page", params) With Butter, you can fetch all your pages, page types, collections, blog posts, and more as simple json responses for your Hugo app! It's really that easy! Get to know the best headless CMS for Hugo ButterCMS is an API-based headless CMS. We're a hosted service and we maintain all the infrastructure. We play nicely with an expanding list of leading technologies. About ButterCMS Can I import my content? Yep. To import existing content from another platform, simply send us an email. Do you host my templates? No – unlike traditional CMSs, your templates stay in your own app; Butter only delivers the content through the API. What kind of database can I use? No database required! We're a SaaS CMS or CaaS. You simply call our Content API from your app. We host and maintain all of the CMS infrastructure.
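On the Hugo side, one possible way to pull that same sample page at build time is Hugo's built-in getJSON function; the endpoint shape follows the ButterCMS REST API, while the headline field name is a placeholder for your own page fields:

{{/* layouts/page/simple.html — a minimal sketch */}}
{{ $url := "https://api.buttercms.com/v2/pages/*/simple-page/?auth_token=your_api_token" }}
{{ $butter := getJSON $url }}
<h1>{{ $butter.data.fields.headline }}</h1>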
https://buttercms.com/hugo-cms/?utm_source=goweekly&utm_medium=email&utm_campaign=numberone
CC-MAIN-2022-40
en
refinedweb
Photo by Ian Taylor on Unsplash Create secure simple containers with the systemd tools Nspawnd and Portabled Isolation Ward The debate surrounding systemd, originally launched with the simple goal of replacing the ancient SysVinit scripts in most Linux distributions with a contemporary solution, has caused even venerable projects like Debian GNU/Linux to split into a pro-systemd faction (Debian) and an anti-systemd faction (Devuan). However you look at it, though, success has proved systemd originator Lennart Poettering right. No major distribution today would seriously consider replacing systemd with another solution. The init system's relevance is dwindling in any case in the age of containerized applications. If MariaDB is just a container you need to launch, then the init system hardly needs to perform any magic. If you follow Red Hat, SUSE, and their offspring, clearly containers are where the journey is headed (see the "Container Advantages" box). A container-first principle now applies to all enterprise distributions, with the exception of Debian. Systemd has a few aces up its sleeve that most admins don't even know about – not least because of the sometimes almost hysterical controversies surrounding the product. Container Advantages From the point of view of both vendors and software producers, containers are convenient, with the distribution only having to provide a few components: a kernel and a runtime environment. The software provider, in turn, also only needs one container in their portfolio because it runs on basically every system with a functional container runtime. Where Red Hat and its associated distros used to have to maintain different versions of MariaDB, PostgreSQL, and practically all the relevant tools for their own distributions, today they only provide a shell and a kernel. The provider of the software itself steps into the breach and offers precisely one container that runs everywhere. Brave new world – and so elegant. As great as this hip stuff may be, the existing inventory of current IT environments will remain around for a while yet, along with the question of how this inventory can be used and managed more sensibly. What is particularly annoying is that conventional environments do not benefit from the many advantages that containers undoubtedly offer, such as the separation of permissions, isolated access to your filesystems, and monitored network traffic. Systemd ships two tools that bring these container-world functions to conventional setups: Nspawnd and Portabled. When deployed correctly, they draw on features from the container world to make conventional applications more secure. If you use Nspawnd wisely, you could even save yourself the trouble of needing Docker or Podman. Unknown Container Runtime When asked about runtime environments for containers, most admins intuitively think of one of two candidates: Docker or Podman. Docker returned containers on Linux to the land of the living and provided a decent business model. That containers are considered commercially attractive at all today is largely thanks to Docker's persistent work. Podman, on the other hand, is known by most admins as the anti-Docker solution created by Red Hat that exists because the Docker developers once tangled with the crimson chapeau and, as expected, got the wrong end of the stick. Podman is meant to work as a one-to-one replacement for Docker.
However, it adopts many of Docker's architectural assumptions, and those are weighty assumptions, because the Docker notion of containers is complex (Figure 1) and can overwhelm you with feature bloat. Containers should be simple. All container implementations on the market ultimately rely on a relatively small set of security features that the Linux kernel itself has offered for a few years. No container implementation can do without namespaces, which logically separate individual parts of the system (Figure 2). A network namespace, for example, lets you create virtual network cards without giving them direct access to the physical NICs of the host. Instead, this access must be established by a bridge or some other means. Namespaces do not only exist for network stacks; they also apply to individual points in the filesystem, to process IDs, and to the assignment of user IDs on a Linux system. They always work along the same principle: As soon as a certain process starts in a namespace, the namespace acts like a jail from which it is impossible to break out. Control groups (cgroups) are added on top in many container environments. Again, they are deeply embedded in the Linux kernel. In very simplified terms, cgroups control access by individual processes to the system's resources. They complement namespaces nicely because they help you enforce an even tighter set of rules for applications and processes than would be possible with namespaces alone. More than Runtimes If Nspawnd is a runtime environment for containers, yet at least two well-functioning environments already exist in the form of Docker and Podman, why, some might ask, does Poettering have his fingers in the pie again? The answer to this question is stunningly simple: systemd-nspawnd targets admins who really only want to use basic kernel features to isolate individual processes. The problem with Podman and Docker, after all, is that you never just get the program in question. Instead, they come with a huge pile of assumptions and prerequisites about how to run a container well and sensibly. You might not even want to deal with things like volumes, software-defined networking, and other stuff if all you want to do is put an Apache process in a virtual jail. Also, you might not want to install dozens of megabytes of additional software for Docker or Podman, thereby raising the maintenance overhead, although this step is not strictly necessary from a functional point of view. Anyone who can see themselves in this scenario – simple containers that use built-in tools without too much tinsel – is a candidate for the target group that Nspawnd has in mind, even if the scope of Nspawnd has naturally expanded in recent years. The daemon has been part of systemd since 2015, so it's an old acquaintance. The "N" in the name – you probably guessed it after following the article up to this point – stands for "namespaces." Reduced to the essential facts, Nspawnd is a tool that sets up the namespaces required for isolated operation of applications and then starts the applications. Some developers jokingly refer to it as "Chroot on steroids," which works well as a metaphor. In the context of concrete technology, however, the comparison is misleading. Containers, Pronto! Nspawnd is now included in most distributions, so a container can be created on a normal Linux system in next to no time. Creating a usable template takes longest; in Docker or Podman parlance, this would be referred to as an image. Nspawnd only requires a working root filesystem of a Linux distribution.
You can put this in place in different ways. The following example assumes Debian GNU/Linux 11 alias "Bullseye" as the distribution used in the container. In the first step, after installing the debootstrap package on a Debian system, you populate an empty folder with a basic Bullseye system (Figure 3): # debootstrap --arch amd64 bullseye /mnt/containers/bullseye-1 To log in to the container as root, pts/0 must be in /etc/securetty: echo "pts/0" >> /mnt/containers/bullseye-1/etc/securetty If you now want to start a running container from the directory you just created, type: systemd-nspawn -D /mnt/containers/bullseye-1 You can now run passwd to change the password for root in the container or add new users. All other commands that you will be familiar with from a normal Debian system are available to you. The recommendation is to store central files such as the package sources in the template and to update the package sources in the template immediately by running apt update. You need to delete the /etc/hostname file in the template so that the container uses the name assigned by Nspawnd later. Finally, D-Bus needs to be installed in the container because the machinectl userland tool (Figure 4), which you use to control the containers from the host, cannot communicate with the particular container otherwise. Once the template is created, copy it to a location with a suitable name (e.g., /var/lib/machines/webserver-1). The next step is to start the container with: systemd-nspawn -M webserver-1 -b -D webserver-1 After that, you can install the userland software that you want to run in the container (e.g., an Apache web server).
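Once an image sits under /var/lib/machines, systemd's own tooling can manage it much like a service; a rough sketch of the typical commands (assuming the image name webserver-1) looks like this:

# start, inspect, and log in to the container
machinectl start webserver-1
machinectl list
machinectl login webserver-1

# have the container come up automatically at boot
systemctl enable systemd-nspawn@webserver-1.service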
https://www.admin-magazine.com/Archive/2022/67/Create-secure-simple-containers-with-the-systemd-tools-Nspawnd-and-Portabled/(tagID)/2
CC-MAIN-2022-40
en
refinedweb
ClassicRunner class An object of this class is used to manage multiple Eyes sessions when working without the Ultrafast Grid. To work with the Ultrafast Grid, use VisualGridRunner instead of this class. Import statement from applitools.selenium import ClassicRunner ClassicRunner method Syntax runner = ClassicRunner() Parameters This method does not take any parameters. Return value Type: ClassicRunner get_all_test_results method Syntax results = runner.get_all_test_results(should_raise_exception) results = runner.get_all_test_results() Parameters should_raise_exception Type: bool [Optional : default = True ] If True, the method raises an exception when tests did not pass. If False, you can use each result's exception property to examine the exception status and, if it is None, check the various methods in the TestResults returned by the method test_results to see if the tests passed or if mismatches were found. If no parameter is passed, then the default value is True. Return value Type: TestResultsSummary
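A minimal usage sketch (the application and test names, URL, and API key are placeholders; the flow follows the usual Eyes-with-runner pattern):

from selenium import webdriver
from applitools.selenium import Eyes, ClassicRunner, Target

runner = ClassicRunner()
eyes = Eyes(runner)
eyes.api_key = "YOUR_API_KEY"

driver = webdriver.Chrome()
try:
    eyes.open(driver, "Demo App", "Smoke test")
    driver.get("https://example.com")
    eyes.check("Home page", Target.window())
    eyes.close(False)   # don't raise here; inspect the summary instead
finally:
    eyes.abort()        # no-op if the test closed normally
    driver.quit()

all_results = runner.get_all_test_results(False)
print(all_results)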
https://applitools.com/tutorials/reference/sdk-api/selenium/python_sdk4/classicrunner
CC-MAIN-2022-40
en
refinedweb
When I try to do something simple like this: import vcf vcf_reader = vcf.Reader(filename="in.vcf.gz") there is an error: AttributeError: partially initialized module 'vcf' has no attribute 'Reader' (most likely due to a circular import) But the vcf module has that attribute. Kindly help. Also, it sounds like your installation of pyvcf is messed up. I would consider trying the version in conda; I always read it with pandas (after removing the header lines). Personally, I just use GATK VariantsToTable to convert it to a .tsv first. It's much easier to parse this way. Unless you wanted something from the header? Another option might be to convert it to another tabular format such as .maf
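For the pandas route mentioned above, one minimal sketch skips the ## meta lines and keeps the #CHROM row as the column header:

import gzip
from io import StringIO
import pandas as pd

with gzip.open("in.vcf.gz", "rt") as fh:
    body = "".join(line for line in fh if not line.startswith("##"))

# the remaining first line (#CHROM ...) becomes the header row
df = pd.read_csv(StringIO(body), sep="\t")
print(df.head())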
https://www.biostars.org/p/416324/#9480044
CC-MAIN-2022-40
en
refinedweb
Lead Image © Dmytro Demianenko, 123RF.com Dialing up security for Docker containers Container Security Container systems like Docker are a powerful tool for system administrators, but Docker poses some security issues you won't face with a conventional virtual machine (VM) environment. For example, containers have direct access to directories such as /proc, /dev, or /sys, which increases the risk of intrusion. This article offers some tips on how you can enhance the security of your Docker environment. Docker Daemon Under the hood, containers are fundamentally different from VMs. Instead of a hypervisor, Linux containers rely on the various namespace functions that are part of the Linux kernel itself. Starting a container is nothing more than rolling out an image to the host's filesystem and creating multiple namespaces. The Docker daemon dockerd is responsible for this process. It is only logical that dockerd is an attack vector in many threat scenarios. The Docker daemon has several security issues in its default configuration. For example, the daemon communicates with the Docker command-line tool using a Unix socket (Figure 1). If necessary, you can activate an HTTP socket for access via the network. The problem is: HTTP access is not secure. If you want security at the Docker daemon level, one option is to implement a system using TLS with client certificates. To set up TLS with Docker, go to the configuration file of the Docker daemon and enable tlsverify, tlscacert, tlscert, and tlskey [1] with the corresponding files as parameters. Then store those files in the client configuration in the folder ~/.docker; use the keywords tls, tlscert, tlskey, and tlscacert to ensure that Docker also finds them; and set the environment variable DOCKER_TLS_VERIFY to 1. If you then call the Docker command at the command line, Docker automatically uses the certificates to log on to the server. You either have to buy the appropriate certificates for this approach, or you are forced to operate a suitable certificate authority (CA) yourself. Tools like TinyCA [2] help make this task a little less painful (Figure 2).
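Put concretely, the daemon side of such a setup usually ends up in /etc/docker/daemon.json, with the client pointed at its own certificate copies; the paths and hostname below are illustrative:

{
  "tlsverify": true,
  "tlscacert": "/etc/docker/ca.pem",
  "tlscert": "/etc/docker/server-cert.pem",
  "tlskey": "/etc/docker/server-key.pem",
  "hosts": ["tcp://0.0.0.0:2376", "unix:///var/run/docker.sock"]
}

On the client, with the certificates stored in ~/.docker:

export DOCKER_TLS_VERIFY=1
docker --host tcp://docker-host:2376 info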
A Docker container itself only has the rights of an unprivileged application – unless the admin chooses otherwise. In addition, a whole series of additional capabilities, such as CHROOT to allow the chroot function to be executed in a container or SYS_RAWIO to get direct access to storage devices, provide some protection. With the --cap add and --cap-drop options for Docker, admins can grant each container only the privileges it actually needs. For example, if a container needs to use a privileged port – one with a number lower than port 1024 – you simply activate the NET_BIND_SERVICE capability flag for the container. Another approach found in various documents is the --privileged switch. If you start a container in privileged mode, it can do virtually everything that root is allowed to do on the host system. Unlike the truism about working as root, many users actually follow this recommendation – they give the container comprehensive permissions, although it probably doesn't need them at all. One of the most important recommendations for safe Docker operation is to start containers in privileged mode only in absolutely exceptional cases and always check carefully whether it is absolutely necessary. The potential damage that can be caused by a privileged container running amok includes taking down the host. Since version 1.10, Docker also supports the possibility to use separate user namespaces for containers. User administration within the container is then completely isolated from the outside. Start the Docker daemon with the --userns-remap parameter to enable separate user namespaces. If you enable this feature, the container runs in a user namespace that maps the user root of the host system to an arbitrary UID, although the service running in the container still believes it has root privileges (Figure 3). Seccomp Several external tools are also available to help with Docker security. One example is the seccomp function, which is part of the Linux kernel and was originally developed for Google Chrome. Seccomp is designed to limit the permitted system calls (syscalls) to the absolute minimum necessary. Syscalls are functions anchored in the kernel on any POSIX-compatible operating system that can be called by external programs. The most popular system calls in Linux are those for interaction with files in filesystems, namely open(), read(), and write(). However, the kernel also supports system calls that deeply affect the running system, such as clock_settime(), which can change the time of the target system; mount() is also a system call. Seccomp lets you define profiles : The profile determines which system calls a program has access to [3]. The target program must support seccomp, because it must select the profile with which it will be associated and set it using the seccomp() syscall. If a program that is restricted by a seccomp profile attempts to execute a syscall that is not explicitly allowed in the profile, the kernel of the host operating system sends the SIGKILL signal without further ado. Seccomp functionality acts as an extension of the Linux capability system. Not all operations that an admin might want to stop a Docker container from doing can be covered by capabilities – seccomp jumps into the breach, providing a way to restrict operations that capabilities can't control. Since version 1.10, Docker is able to set seccomp profiles based on individual containers. Buy this article as PDF (incl. VAT)
https://www.admin-magazine.com/Articles/Dialing-up-security-for-Docker-containers
CC-MAIN-2022-40
en
refinedweb
Python: write a Twitter client Why fill up the internet with pointless 140-character drivel yourself when you can write an application to do it for you? This issue we're going to create our own Twitter application using Python and two libraries: Tweepy, a Twitter Python library, and our old favourite EasyGUI, a library of GUI elements. This project will cover the creation of the application using Python and also the configuration of a Twitter application using the Twitter development website dev.twitter.com. Tweepy is a Python library that enables us to create applications that can interact with Twitter. With Tweepy we can: - Post tweets and direct messages. - View our timeline. - Receive mentions and direct messages. Now you may be thinking "Why would I want to use Python with Twitter?" Well, dear reader, quite simply we can use Python to build our own applications that can use Twitter in any of the ways listed above. But we can also use Twitter and Python to enable interaction between the web and the physical world. We can create a script that searches for a particular hashtag, say #linuxvoice, and when it finds it, an LED can flash, a buzzer can buzz or a robot can start navigating its way around the room. In this tutorial we will learn how to use Tweepy and how to create our own application. At the end of this project you will have made a functional Twitter client that can send and receive tweets from your Twitter account. Downloading Tweepy and EasyGUI Tweepy The simplest method to install Tweepy on your machine is via Pip, a package manager for Python. This does not come installed as standard on most machines, so a little command line action is needed. The instructions below work for all Debian- and Ubuntu-based distros. First, open a terminal and type sudo apt-get update to ensure that our list of packages is up to date. You may be asked for your password – once you have typed it in, press the Enter key. You will now see lots of on-screen activity as your software packages are updated. When this is complete, the terminal will return control to you, and now you should type the following to install Pip. If you are asked to confirm any changes or actions, please read the instructions carefully and only answer 'Yes' if you're happy. sudo apt-get install python-pip With Pip installed, our attention now shifts to installing Tweepy, which is accomplished in the same terminal window by issuing the following command. sudo pip install tweepy Installation will only take a few seconds and, when complete, the terminal will return control to you. Now is the ideal time to install EasyGUI, also from the Pip repositories. To create an application you will need to sign in with the Twitter account that you would like to use with it. Twitter apps Twitter will not allow just any applications to use its platform – all applications require a set of keys and tokens that grant them access to the Twitter platform. The keys are: - consumer_key - consumer_secret And the tokens are: - access_token - access_token_secret To get this information we need to head over to dev.twitter.com and sign in using the Twitter account that we wish to use in our project. It might be prudent to set up a test account rather than spam all of your followers. When you have successfully signed in, look to the top of the screen and you'll see your Twitter avatar; left-click on this and select "My Applications". You will now see a new screen saying that you don't have any Twitter apps, so let's create our first Twitter app.
To create our first app, we need to provide four pieces of information to Twitter:

- The name of our application.
- A description of the application.
- A website address, so users can find you. (This can be completed using a placeholder address.)
- Callback_URL. This is where the application should take us once we have successfully been authenticated on the Twitter platform. This is not relevant for this project, so you can either leave it blank or put in another URL that you own.

After reading and understanding the terms and conditions, click on "I Agree", then create your first app. Right about now is an ideal time for a cup of tea.

With refreshment suitably partaken, now is the time to tweak the authentication settings. Twitter has auto-generated our API key and API secret, which are our consumer_key and consumer_secret respectively in Tweepy. We can leave these as they are. Our focus is now on the Access Level settings. Typically, a new app will be created with read-only permissions, which means that the application can read Twitter data but not post any tweets or direct messages. In order for the app to post content, it first must be given permission. To do this, click on the "modify app permissions" link. A new page will open from which the permissions can be tweaked. For this application, we need to change the settings to Read and Write. Make this change and apply the settings. To leave this screen, click on the Application Management title at the top-left of the page.

We now need to create an access token, which forms the final part of our authentication process. This is located in the API Keys tab. Create a new token by clicking Create My Access Token. Your token will now be generated but it requires testing, so scroll to the top-right of the screen and click "Test OAUTH". This will test your settings and send you to the OAuth Settings screen. In here are the keys and tokens that we need, so please grab a copy of them for later in this tutorial. These keys and tokens are sensitive, so don't share them with anyone and do not have them available on a publicly facing service. These details authenticate that it is YOU using this application, and in the wrong hands they could be used to send spam or to authenticate you on services that use the OAuth system. With these details in hand, we are now ready to write some Python code.

Creating a new application is an easy process, but there are a few hoops to jump through in order to be successful.

Python

For this tutorial, we'll use the popular Python editor Idle. Idle is the simplest editor available and it provides all of the functionality that we require. Idle does not come installed as standard, but it can be installed from your distribution's repositories. Open a new terminal and type in the following for Debian/Ubuntu-based systems:

sudo apt-get install idle-python2.7

With Idle now installed it will be available via your menu; find and select it to continue. Idle is broken down into two areas: a shell, where ideas can be tried out and where the output from our code will appear; and an editor, in which we can write larger pieces of code (code written in the editor must be saved before it can be run). Idle will always start with the shell, so to create a new editor window go to File > New and a new editor window will appear. To start with, let's look at a simple piece of test code, which will ensure that our Twitter OAuth authentication is working as it should and that the code will print a new tweet from your timeline every five seconds.
import tweepy
from time import sleep
import sys

In this first code snippet we import three libraries. The first of these is the tweepy library, which brings the Twitter functionality that we require. We import the sleep function from the time library so that we can control the speed of the tweets being displayed. Finally we import the sys library so that we can later enable a method to exit the Twitter stream.

consumer_key = "API KEY"
consumer_secret = "API SECRET"
access_token = "TOKEN"
access_token_secret = "TOKEN SECRET"

In this second code snippet we create four variables to store our various API keys and tokens. Remember to replace the text inside of the " " with the keys and tokens that you obtained via Twitter.

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)

For the third code snippet we first create a new variable called auth, which stores the output of the Tweepy authorisation handler, a mechanism to connect our code with Twitter and successfully authenticate.

api = tweepy.API(auth)
public_tweets = api.home_timeline()

The fourth code snippet creates two more variables. We access the Twitter API via Tweepy and save the output as the variable api. The second variable instructs Tweepy to get the user's home timeline information and save it as a variable called public_tweets.

for tweet in public_tweets:
    try:
        print tweet.text
        sleep(5)
    except:
        print("Exiting")
        sys.exit()

The final code snippet uses a for loop to iterate over the tweets that have been gathered from your Twitter home timeline. Next up is a new construction: try and except. It works in a similar fashion to if and else, but the try and except construction follows the Python philosophy that it's "easier to ask for forgiveness than for permission": try relates to forgiveness, while if and else refer to permission. Using try and except is seen as the more elegant solution. In this case we use try to print each tweet from the home timeline and then wait for five seconds before repeating the process. For the except part of the construction we have two lines of code: a print function that prints the word "Exiting", followed by the sys.exit() function, which cleanly closes the application down. With the code complete for this section, save it, then press F5 to run the code in the Idle shell.

Applications are set to be read-only by default, and will require configuration to enable your application to post content to Twitter.

Sending a tweet

Now that we can receive tweets, the next logical step is to send a tweet from our code. This is surprisingly easy to do, and we can even recycle the code from the previous step, all the way up to and including:

api = tweepy.API(auth)

And the code to send a tweet can be easily added as the last line:

api.update_status("Tinkering with tweepy, the Twitter API for Python.")

Change the text in the bracket to whatever you like, but remember to stay under 140 characters. When you're ready, press F5 to save and run your code. There will be no output in the shell, so head over to your Twitter profile via your browser/Twitter client and you should see your tweet.

We covered EasyGUI in LV006, but to quickly recap, it's a great library that enables anyone to add a user interface to their Python project. It's easier to use than Tkinter, another user interface framework, and ideal for children to quickly pick up and use.
For this project we will use the EasyGUI library to create a user interface to capture our status message. We will then add functionality to send a picture saved on our computer.

Using EasyGUI we can post new messages to the desktop via the msgbox function.

Adding a user interface

Open the file named send_tweet.py and let's review the contents.

import tweepy
from time import sleep
import sys
import easygui as eg

This code snippet only has one change, and that is the last line where we import the EasyGUI library and rename it to eg. This is a shorthand method to make using the library a little easier.

consumer_key = "Your Key"
consumer_secret = "Your secret"
access_token = "Your token"
access_token_secret = "Your token"
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)

These variables are exactly the same as those used previously.

message = eg.enterbox(title="Send a tweet", msg="What message would you like to send?")

This new variable, called message, stores the output of the EasyGUI enterbox, an interface that asks the user a question and captures their response. The enterbox has a title visible at the top of the box, and the message, shortened to msg, is a question asked to the user.

try:
    length = len(message)
    if length < 140:
        api.update_status(message)
    else:
        eg.msgbox(msg="Your tweet is too long. It is "+str(length)+" characters long")
except:
    sys.exit()

For this final code snippet we're reusing the try except construction. Twitter has a maximum tweet length of 140 characters. Anything over this limit is truncated, so we need to check that the length is correct using the Python len function. The len function will check the length of the variable and save the value as the variable length. With the length now known, our code checks to see if the length is less than 140 characters, and if this is true it runs the function update_status with the contents of our message variable. To see the output, head back to Twitter and you should see your tweet. Congratulations! You have sent a tweet using Python. Now let's put the icing on the cake and add an image.

EasyGUI looks great and is an easy drop-in replacement for the humble print function.

Adding an image to our code

The line to add an image to our tweet is as follows:

image = eg.fileopenbox(title="Pick an image to attach to your tweet")

We create a variable called image, which we use to store the output from the EasyGUI fileopenbox function. This function opens a dialog box similar to a File > Open dialog box. You can navigate your files and select the image that you wish to attach. Once an image is chosen, its absolute location on your computer is saved as the variable image. The best place to keep this line of code is just above the line where the status message is created and saved as a variable called message. With the image selection handled, now we need to modify an existing line so that we can attach the image to the update. Navigate to this line in your code:

api.update_status(message)

And change it to this:

api.update_with_media(image, status=message)

Previously we just sent text, so using the update_status function and the message contents was all that we needed, but to send an image we need to use the update_with_media function and supply two arguments: the image location, stored in a variable for neatness; and the status update, saved as a variable called message. With these changes made, save the code and run it by pressing F5.
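Putting the pieces together, a minimal version of the finished send_tweet.py might look like this sketch (the key and token placeholders are yours to fill in; the logic simply combines the snippets above):

import sys
import tweepy
import easygui as eg

# Replace with your own keys and tokens from dev.twitter.com
consumer_key = "API KEY"
consumer_secret = "API SECRET"
access_token = "TOKEN"
access_token_secret = "TOKEN SECRET"

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)

# Ask for the image first, then the status text
image = eg.fileopenbox(title="Pick an image to attach to your tweet")
message = eg.enterbox(title="Send a tweet",
                      msg="What message would you like to send?")

try:
    if len(message) < 140:
        api.update_with_media(image, status=message)
    else:
        eg.msgbox(msg="Your tweet is too long. It is "
                      + str(len(message)) + " characters long")
except:
    sys.exit()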
You should be asked for the image to attach to your tweet, and once that has been selected you will be asked for the status update message. With both of these supplied, the project will post your update to Twitter, so head over and check that it has worked.

Sending an image is made easier via a GUI interface that enables you to select the file that you wish to send. Once selected, it saves the absolute path to the file.

Extension activity

Following these steps, we've managed to make scripts that can read our timeline and send tweets and images, but we can also merge the two together using an EasyGUI menu and a few functions, as sketched below. The code for this activity is available via the GitHub repository, so feel free to examine the code and make the application your own.
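The repository code isn't reproduced in the article, but a minimal sketch of the merged, menu-driven client could be structured like this (the three function bodies would simply reuse the code from the sections above; choicebox is EasyGUI's standard menu widget):

import easygui as eg

def read_timeline():
    # reuse the timeline-reading code from earlier
    pass

def send_tweet():
    # reuse the enterbox/update_status code from earlier
    pass

def send_image():
    # reuse the fileopenbox/update_with_media code from earlier
    pass

choices = ["Read my timeline", "Send a tweet", "Send an image", "Quit"]
while True:
    choice = eg.choicebox(msg="What would you like to do?",
                          title="Twitter client", choices=choices)
    if choice == "Read my timeline":
        read_timeline()
    elif choice == "Send a tweet":
        send_tweet()
    elif choice == "Send an image":
        send_image()
    else:
        break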
https://www.linuxvoice.com/python-write-a-twitter-client/
CC-MAIN-2022-40
en
refinedweb
str_to_sympy function

(Shortest import: from brian2.parsing.sympytools import str_to_sympy)

- brian2.parsing.sympytools.str_to_sympy(expr, variables=None) [source]

Parses a string into a sympy expression. There are two reasons for not using sympify directly: 1) sympify does a from sympy import *, adding all functions to its namespace. This leads to issues when trying to use sympy function names as variable names. For example, both beta and factor – quite reasonable names for variables – are sympy functions, and using them as variables would lead to a parsing error. 2) We want to use a common syntax across expressions and statements, e.g. we want to allow the use of and (instead of &) and function names like ceil (instead of ceiling).

- Parameters

expr : str
The string expression to parse.

variables : dict, optional

- Returns

s_expr : A sympy expression

- Raises

SyntaxError
In case of any problems during parsing.
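The docstring doesn't include a usage example, so here is a short hypothetical one based on the signature above (the exact printed form depends on your sympy version):

from brian2.parsing.sympytools import str_to_sympy

# 'beta' is a sympy function name, but str_to_sympy treats it as a
# plain variable, which plain sympify would not
expr = str_to_sympy('beta + ceil(v)')
print(expr)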
https://brian2.readthedocs.io/en/latest/reference/brian2.parsing.sympytools.str_to_sympy.html
CC-MAIN-2022-40
en
refinedweb
Smashing Python tech debt with Polymorphism

Polymorphism is a pillar of Object Oriented Programming. OOP underpins modern software development in many industries. This article will dive into Polymorphism using a real-world, non-contrived example. That's right: Polymorphism will not be explained in terms of animals or Dogs or Ducks but in terms of a real-world problem that was solved via Polymorphism. After giving a super low-level real-world example there will be discussion of higher-level details of Polymorphism.

Abstracting away differences in third party APIs

Imagine a cloud-based tool that analyses GitHub pull requests and adds comments to the Pull Request for any issues found. This is achievable because GitHub provides an API that the code review tool can GET information from and POST the review to. Nice and simple: use an off-the-shelf API client like PyGithub. However, things get more complicated when you find out the code review tool also supports Bitbucket and has plans to soon support GitLab. This complicates things because GitHub, Bitbucket, and GitLab of course do not have a unified API. To comment on a pull request:

- GitHub: /repos/{org}/{repo}/pulls/{number}/reviews
- Bitbucket: /repositories/{space}/{repository}/pullrequests/{number}/
- GitLab: /projects/{repo}/merge_requests/{number}/discussions

These different Source Control Management systems (SCMs) also have different strategies for authentication and authorization, and different statuses for PRs (draft, open, etc). They even have different names for what GitHub calls PRs!

These differences were not an issue in the beginning because initially only GitHub was supported by the code review tool, meaning only one client was needed: GitHubClient. One simple client and a few methods. As the pull request analysis tool grew and Bitbucket support was introduced, BitbucketClient was written. Before the Bitbucket integration, everywhere in the codebase used GitHubClient. Methods on GitHubClient were invoked everywhere, so when the developer needed to invoke BitbucketClient to authenticate he had to write some extra code: he would need to make decisions using if conditions. Same story for checking permissions, same story for pulling the code, and for commenting on the pull request.

After a couple of weeks it was determined this approach was not growing well. Lots of duplication. Lots of creepy self-aware "what am I!?" code. Conditionals everywhere. It was a mess of spaghetti.

The problem was identified and a design session was held. Instead of if statements everywhere, imagine a Polymorphic API client: two API clients with exactly the same methods but each talking to a different API. This approach massively simplifies the problem by abstracting away the differences. To keep things simple, we will illustrate with only three methods: approve_pull_request, comment_on_pull_request, and check_permissions. You can probably guess what they do.

Abstract ApiClient

First we use the built-in abc module (which stands for Abstract Base Classes) to define an Abstract Base Class for the API client:

import abc

class AbstractClient(abc.ABC):
    @abc.abstractmethod
    def approve_pull_request(self, commit_id):
        pass

    @abc.abstractmethod
    def comment_on_pull_request(self, comments, commit_id, body):
        pass

The idea is that next we write a number of concrete API clients that subclass AbstractClient.
The advantage of this approach is that a common interface is guaranteed: if GitHubClient or BitbucketClient are missing either approve_pull_request or comment_on_pull_request then Python will throw an exception:

class GitHubClient(AbstractClient):
    pass

>>> client = GitHubClient()
TypeError: Can't instantiate abstract class GitHubClient with abstract methods approve_pull_request, comment_on_pull_request

Abstracting away GitHub's API

Let's look at the concrete implementation of the GitHub API client. To oversimplify, GitHub's API documentation provides the guidance:

To determine if the application has the permissions needed to clone the repository (we're trying to analyse the code in the pull request, after all) and check if the application has permission to review and comment, the API client must:

- Authenticate as an app and get a payload showing the permissions.
- The payload is a dict that looks like {"checks": "write", "contents": "read"}

To approve a pull request in GitHub's API the API client must:

- Authenticate as an app and get a JSON Web Token
- Perform a POST request to /{org}/{repo}/statuses/{commit_id} exposing the JWT to the API in a header.

To comment on a pull request the API client must:

- Authenticate as an app and get a JSON Web Token
- Perform a POST request to /repos/{org}/{repo}/pulls/{number}/reviews exposing the JWT to the API in a header.

Putting this together we end up with a permissions container and API client like so:

class GitHubPermissions(client.AbstractPermissions):
    @property
    def has_permission_read_content(self):
        return self.permissions.get("contents") == "read"

    @property
    def has_permission_status_write(self):
        return self.permissions.get("statuses") == "write"

class GitHubClient(client.AbstractClient):
    def approve_pull_request(self, commit_id):
        response = self.session.post(
            url=self.urls['pull_request'],
            json={
                "sha": commit_id,
                "state": "success",
                "context": "Code Review Doctor"
            }
        )
        response.raise_for_status()

    def comment_on_pull_request(self, comments, commit_id, body):
        data = {
            "commit_id": commit_id,
            "event": "REQUEST_CHANGES",
            "body": body,
        }
        response = self.session.post(
            url=self.urls["reviews"],
            json=data
        )
        response.raise_for_status()

Performing the code analysis

Now that we have Polymorphic API clients, the code that interacts with the clients can ignore any differences between the Bitbucket API, the GitHub API, and any future APIs we integrate with:

def review_pull_request(
        permissions, api_client, source_commit, issues
):
    # the real code does more stuff, but removed it for simplicity
    if not permissions.has_permission_read_content:
        logger.warning('does not have permissions: contents read')
        return

    if not permissions.has_permission_status_write:
        logger.warning('does not have permissions: statuses write')
        return

    if issues:
        api_client.comment_on_pull_request(
            commit_id=source_commit,
            body='some improvements needed'
        )
    else:
        api_client.approve_pull_request(commit_id=source_commit)

This code results in Python and Django fixes suggested right inside the pull requests. You can check your GitHub or Bitbucket Pull Requests, and scan your entire codebase for free online.

Abstracting away Bitbucket's API

Now let's look at the concrete implementation of the Bitbucket API client.
To oversimplify, Bitbucket's API documentation provides the guidance:

To determine if the application has the permissions needed to clone the repository (we're trying to analyse the pull request, after all) and check if the application has permission to review and comment, the API client must:

- Authenticate as an app and get a payload showing the permissions.
- The payload is a list that looks like ['pullrequest']

To approve a pull request in Bitbucket's API the API client must:

- Authenticate as an app and get a JSON Web Token
- Perform a POST request to /repositories/{space}/{repo}/commit/{commit}/statuses/build/ exposing the JWT to the API in a header.

To comment on a pull request the API client must:

- Authenticate as an app and get a JSON Web Token
- Perform a POST request to /repositories/{space}/{repository}/pullrequests/{number}/comments exposing the JWT to the API in a header.

Putting this together we end up with a permissions container and API client like:

class BitbucketPermissions(client.AbstractPermissions):
    @property
    def has_permission_read_content(self):
        return "pullrequest" in self.permissions

    @property
    def has_permission_status_write(self):
        return "pullrequest" in self.permissions

class BitbucketClient(client.AbstractClient):
    def approve_pull_request(self, commit_id):
        data = {"key": "code-review-doctor", "state": "SUCCESSFUL"}
        response = self.session.post(
            url=self.urls['status'],
            json=data
        )
        response.raise_for_status()

    def comment_on_pull_request(self, comments, commit_id, body):
        for comment in comments:
            data = {
                "content": {"raw": comment['body']},
                "inline": {
                    "to": comment['line'],
                    "path": comment['path']
                },
            }
            response = self.session.post(
                url=self.urls['comments'],
                json=data
            )
            response.raise_for_status()

High level overview of Polymorphism

The scenario mentioned above is not new. This happens every day in software development. If a person took a 20,000 ft view of this situation, it would take less than an hour to identify the root cause of the problem:

Identical classes are not being grouped based on their characteristics and common functionalities

This is where Polymorphism comes into play. According to Polymorphism:

- Group classes of the same hierarchy or functionality.
- Identify the characteristics and functionalities of these classes.
- Create a base class that has these characteristics.
- Derive child classes out of this parent class.
- Override characteristics and functionalities in the derived classes.

Fixing the above situation

When the team fixed the above situation by implementing a Polymorphic API client, the following benefits materialized:

- Code maintenance: The resulting code can be maintained over a long period without the restriction of more and more API classes. Every time a new Git host is added to the system, a class is created for it that is derived from the base ApiClient class.
- Improved readability: Developers could skip long if and switch chains. The runtime decides which routine to call.
- Decoupled code: BitbucketClient does not interfere with GitHubClient, nor will GitHubClient interfere with GitLabClient. Classes become separate and isolated in their functionality.

While polymorphism comes with a lot of benefits, it can also have some cons. Because there are multiple implementations of a single interface, it can add a step when debugging the application. While reading the code, a developer can sometimes be confused about which implementation is actually running.
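The article stops short of showing how a caller obtains the right client in the first place. One hedged possibility, not from the original post, is a small registry keyed by provider name, so the remaining branching lives in exactly one place (the provider keys and constructor arguments below are assumptions):

CLIENTS = {
    'github': (GitHubClient, GitHubPermissions),
    'bitbucket': (BitbucketClient, BitbucketPermissions),
}

def get_client(provider, **credentials):
    # A single lookup replaces the per-call-site if/elif chains
    client_class, permissions_class = CLIENTS[provider]
    return client_class(**credentials), permissions_class(**credentials)

# The rest of the codebase never needs to know which SCM it talks to:
api_client, permissions = get_client('github', token='...')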
Applying Polymorphism

When you need to apply polymorphism, you will need to know the structure of your classes. There are two ways to organize your classes. The first one is to have a parent-child relationship, also known as a hierarchical relationship. The second one is to implement interfaces. Interfaces are not a Python concept, so this will be skipped. To fully understand both of these methods there are a few concepts involved: abstract classes and concrete classes. Other languages will probably use interfaces too. Below we will see these concepts along with the methods to implement polymorphism.

Abstract Classes

In the above scenario AbstractClient is (no shock) an abstract class. The abstract class does not have any implementation of its own. It cannot be instantiated. Rather, it is meant to be used by the derived classes to inherit the features of the abstract class.

Concrete Classes

The concrete classes are the classes that are derived from the base class or abstract class. There can be any number of concrete classes from a single base class. In the scenario we saw above, GitHubClient, BitbucketClient, and GitLabClient are the concrete classes. They all inherit from the base abstract class called AbstractClient.

Conclusion: it made the code a pleasure to maintain

In this post, we saw what polymorphism is. Applying polymorphism can make the application code a lot more readable and maintainable. Furthermore, using polymorphism, application developers can write much simpler code. This code can be easily maintained and can be further extended by applying the necessary design patterns. If you have an application that you wish to maintain for a long time in the future, I highly recommend going with polymorphism for decoupled and logical application code.
https://codereviewdoctor.medium.com/smashing-python-tech-debt-with-polymorphism-364db4f52b54?source=user_profile---------1----------------------------
CC-MAIN-2022-40
en
refinedweb
global analytic equation solver

Project description

# RootLocker Project

This module permits simple solving of analytic equations in the complex plane. It ensures that all the solutions, along with their order of multiplicity, are recovered. The algorithm used relies on the argument principle and therefore requires at least C1 functions.

## Getting Started

These instructions will get you a copy of the project up and running on your local machine for development and testing purposes. See deployment for notes on how to deploy the project on a live system.

### Prerequisites

This module requires the ad module, which provides automatic differentiation easily.

### Installing

The easiest way to install this library is to use pip.

### Use

There are two ways to use this module.

A. In this mode, you can define a function which uses the admath library (loaded by default when importing the rootlocker module):

import rootlocker as rl # Import rootlocker module along with admath module

bounds = [complex(-2,-1), complex(8,3)] # Define the domain of research

def f(x): # Define the function for which roots are searched
    return rl.cos(x) + 4.0*rl.sin(2*x)

sols, ms = rl.solve(bounds, func=f, myerr=1e-3) # Solve the equation f(x)=0 with the accuracy specified by myerr

ms.plotRoots() # Plot the solutions (uses matplotlib)

ms.printStats() # Print statistics about the computation

B. In order to avoid the use of the ad module, one can provide an equation provider by overriding the class EqProvider. For instance, one can directly use custom C or C++ code for the function and its derivative, and this permits a significant reduction of the computational time required by the algorithm.

## Authors

Maxence Miguel-Brebion - (IMFT)

## License

This project is licensed under the MIT License - see the LICENSE.txt file for details

## Acknowledgments

Laurent Selle, Thierry Poinsot and Emilien Courtine from the Institut de Mecanique des Fluides de Toulouse have greatly contributed to the algorithm behind the rootlocker module.
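The README mentions overriding EqProvider but doesn't show it. The sketch below is purely illustrative: the method names f and df are hypothetical, not the library's documented interface, so check the actual EqProvider class before relying on them.

import rootlocker as rl

class MyProvider(rl.EqProvider):  # hypothetical subclassing, API may differ
    def f(self, x):               # hypothetical: the function itself
        return x**3 - 1.0

    def df(self, x):              # hypothetical: its analytic derivative,
        return 3.0 * x**2         # avoiding the ad module entirely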
https://pypi.org/project/rootlocker/
CC-MAIN-2022-40
en
refinedweb
You can import the whole library using:

import * as BABYLON from '@babylonjs/core/Legacy/legacy';

or individual classes to benefit from enhanced tree shaking using:

import { Scene } from '@babylonjs/core/scene';
import { Engine } from '@babylonjs/core/Engines/engine';

To add a module, install the respective package. A list of extra packages and their installation instructions can be found on the babylonjs user on npm, scoped on @babylonjs.

Usage

See our ES6 dedicated documentation:

// Side-effects only imports allowing the standard material to be used as default.
import "@babylonjs/core/Materials/standardMaterial";
// Side-effects only imports allowing Mesh to create default shapes (to enhance tree shaking, the construction methods on mesh are not available if the meshbuilder has not been imported).
import "@babylonjs/core/Meshes/Builders/sphereBuilder";
import "@babylonjs/core/Meshes/Builders/boxBuilder";
import "@babylonjs/core/Meshes/Builders/groundBuilder";

const canvas = document.getElementById("renderCanvas") as HTMLCanvasElement;
const engine = new Engine(canvas);
var scene = new Scene(engine);

// This creates and positions a free camera (non-mesh)
var camera = new FreeCamera("camera1", new Vector3(0, 5, -10), scene);

// This targets the camera to scene origin
camera.setTarget(Vector3.Zero());

// This attaches the camera to the canvas
camera.attachControl(canvas, true);

// This creates a light, aiming 0,1,0 - to the sky (non-mesh)
var light = new HemisphericLight("light1", new Vector3(0, 1, 0), scene);

// Default intensity is 1. Let's dim the light a small amount
light.intensity = 0.7;

// Our built-in 'sphere' shape. Params: name, subdivs, size, scene
var sphere = Mesh.CreateSphere("sphere1", 16, 2, scene);

// Move the sphere upward 1/2 its height
sphere.position.y = 2;

// Our built-in 'ground' shape. Params: name, width, depth, subdivs, scene
Mesh.CreateGround("ground1", 6, 6, 2, scene);

engine.runRenderLoop(() => {
    scene.render();
});
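As a side note beyond the readme text above: other capabilities follow the same side-effect-import pattern. For example, assuming the @babylonjs/loaders package is installed, registering the glTF loader is expected to look roughly like this:

// npm install @babylonjs/loaders
import "@babylonjs/loaders/glTF"; // side-effect import registering the loader

import { SceneLoader } from "@babylonjs/core/Loading/sceneLoader";

// Append a glTF asset into an existing scene
SceneLoader.Append("./assets/", "model.gltf", scene);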
https://www.npmjs.com/package/@babylonjs/core
CC-MAIN-2022-40
en
refinedweb
As explained in When to Use a Custom Component (page 426), a component class defines the state and behavior of a UI component. The state information includes the component's type, identifier, and local value. The behavior defined by the component class includes the following:

- Decoding (converting the request parameter to the component's local value)
- Encoding (converting the local value into the corresponding markup)
- Saving the state of the component
- Updating the bean value with the local value
- Processing validation on the local value
- Queueing events

The UIComponentBase class defines the default behavior of a component class. All the classes representing the standard components extend from UIComponentBase. These classes add their own behavior definitions, as your custom component class will do. Your custom component class must either extend UIComponentBase directly or extend a class representing one of the standard components. These classes are located in the javax.faces.component package and their names begin with UI. If your custom component serves the same purpose as a standard component, you should extend that standard component rather than directly extend UIComponentBase. For example, suppose you want to create an editable menu component. It makes sense to have this component extend UISelectOne rather than UIComponentBase because you can reuse the behavior already defined in UISelectOne. The only new functionality you need to define is to make the menu editable.

Whether you decide to have your component extend UIComponentBase or a standard component, you might also want your component to implement one or more behavioral interfaces, such as ActionSource2, EditableValueHolder, StateHolder, and ValueHolder. If your component extends UIComponentBase, it automatically implements only StateHolder. Because all components, directly or indirectly, extend UIComponentBase, they all implement StateHolder. If your component extends one of the other standard components, it might also implement other behavioral interfaces in addition to StateHolder. If your component extends UICommand, it automatically implements ActionSource2. If your component extends UIOutput or one of the component classes that extend UIOutput, it automatically implements ValueHolder. If your component extends UIInput, it automatically implements EditableValueHolder and ValueHolder. See the JavaServer Faces API Javadoc to find out what the other component classes implement.

You can also make your component explicitly implement a behavioral interface that it doesn't already implement by virtue of extending a particular standard component. For example, if you have a component that extends UIInput and you want it to fire action events, you must make it explicitly implement ActionSource2 because a UIInput component doesn't automatically implement this interface.

The image map example has two component classes: AreaComponent and MapComponent. The MapComponent class extends UICommand and therefore implements ActionSource2, which means it can fire action events when a user clicks on the map. The AreaComponent class extends the standard component UIOutput. MapComponent could perform its own rendering, but it delegates this rendering to MapRenderer. AreaComponent is bound to a bean that stores the shape and coordinates of the region of the image map. You'll see how all this data is accessed through the value expression in Creating the Renderer Class (page 447).
The behavior of AreaComponent consists of the following:

- Retrieving the shape and coordinate data from the bean
- Setting the value of the hidden tag to the id of this component
- Rendering the area tag, including the JavaScript for the onmouseover, onmouseout, and onclick functions

Although these tasks are actually performed by AreaRenderer, AreaComponent must delegate the tasks to AreaRenderer. See Delegating Rendering to a Renderer (page 446) for more information. The rest of this section describes the tasks that MapComponent performs as well as the encoding and decoding that it delegates to MapRenderer. Handling Events for Custom Components (page 449) details how MapComponent handles events.

If your custom component class delegates rendering, it needs to override the getFamily method of UIComponent to return the identifier of a component family, which is used to refer to a component or set of components that can be rendered by a renderer or set of renderers. The component family is used along with the renderer type to look up renderers that can render the component. Because MapComponent delegates its rendering, it overrides the getFamily method:

public String getFamily() {
    return ("Map");
}

The component family identifier, Map, must match that defined by the component-family elements included in the component and renderer configurations in the application configuration resource file. Registering a Custom Renderer with a Render Kit (page 478) explains how to define the component family in the renderer configuration. Registering a Custom Component (page 480) explains how to define the component family in the component configuration.

During the render response phase, the JavaServer Faces implementation processes the encoding methods of all components and their associated renderers in the view. The encoding methods convert the current local value of the component into the corresponding markup that represents it in the response. The UIComponentBase class defines a set of methods for rendering markup: encodeBegin, encodeChildren, and encodeEnd. If the component has child components, you might need to use more than one of these methods to render the component; otherwise, all rendering should be done in encodeEnd. Because MapComponent is a parent component of AreaComponent, the area tags must be rendered after the beginning map tag and before the ending map tag. To accomplish this, the MapRenderer class renders the beginning map tag in encodeBegin and the rest of the map tag in encodeEnd. The JavaServer Faces implementation automatically invokes the encodeEnd method of AreaComponent's renderer after it invokes MapRenderer's encodeBegin method and before it invokes MapRenderer's encodeEnd method. If a component needs to perform the rendering for its children, it does this in the encodeChildren method.
Here are the encodeBegin and encodeEnd methods of MapRenderer:

public void encodeBegin(FacesContext context, UIComponent component)
        throws IOException {
    if ((context == null) || (component == null)) {
        throw new NullPointerException();
    }
    MapComponent map = (MapComponent) component;
    ResponseWriter writer = context.getResponseWriter();
    writer.startElement("map", map);
    writer.writeAttribute("name", map.getId(), "id");
}

public void encodeEnd(FacesContext context, UIComponent component)
        throws IOException {
    if ((context == null) || (component == null)) {
        throw new NullPointerException();
    }
    MapComponent map = (MapComponent) component;
    ResponseWriter writer = context.getResponseWriter();
    writer.startElement("input", map);
    writer.writeAttribute("type", "hidden", null);
    writer.writeAttribute("name", getName(context, map), "clientId");
    writer.endElement("input");
    writer.endElement("map");
}

Notice that encodeBegin renders only the beginning map tag. The encodeEnd method renders the input tag and the ending map tag. The encoding methods accept a UIComponent argument and a FacesContext argument. The FacesContext instance contains all the information associated with the current request. The UIComponent argument is the component that needs to be rendered. The rest of the method renders the markup: after getting the ResponseWriter instance, the method renders the tag by calling startElement, passing the name of the tag and the component to which it corresponds (in this case, map). (Passing this information to the ResponseWriter instance helps design-time tools know which portions of the generated markup are related to which components.) After calling startElement, you can call writeAttribute to render the tag's attributes. The writeAttribute method takes the name of the attribute, its value, and the name of a property or attribute of the containing component corresponding to the attribute. The last parameter can be null, and it won't be rendered. The name attribute value of the map tag is retrieved using the getId method of UIComponent, which returns the component's unique identifier. The name attribute value of the input tag is retrieved using the getName(FacesContext, UIComponent) method of MapRenderer.

If you want your component to perform its own rendering but delegate to a renderer if there is one, include the following lines in the encoding method to check whether there is a renderer associated with this component.

if (getRendererType() != null) {
    super.encodeEnd(context);
    return;
}

If there is a renderer available, this method invokes the superclass's encodeEnd method, which does the work of finding the renderer. The MapComponent class delegates all rendering to MapRenderer, so it does not need to check for available renderers.

In some custom component classes that extend standard components, you might need to implement other methods in addition to encodeEnd. For example, if you need to retrieve the component's value from the request parameters (to, for example, update a bean's values) you must also implement the decode method. During the apply request values phase, the JavaServer Faces implementation processes the decode methods of all components in the tree. The decode method extracts a component's local value from incoming request parameters and uses that value to update the component:

public void decode(FacesContext context, UIComponent component) {
    if ((context == null) || (component == null)) {
        throw new NullPointerException();
    }
    MapComponent map = (MapComponent) component;
    String key = getName(context, map);
    String value = (String)context.getExternalContext().
        getRequestParameterMap().get(key);
    if (value != null) {
        map.setCurrent(value);
    }
}

The decode method first gets the name of the hidden input field by calling getName(FacesContext, UIComponent). It then uses that name as the key to the request parameter map to retrieve the current value of the input field. This value represents the currently selected area. Finally, it sets the value of the MapComponent class's current attribute to the value of the input field.

Nearly all the attributes of the standard JavaServer Faces tags can accept expressions, whether they are value expressions or method expressions. It is recommended that you also enable your component attributes to accept expressions because this is what page authors expect, and it gives page authors much more flexibility when authoring their pages. See Creating the Component Tag Handler (page 450) for how these expressions are handled, and see page 445 for more information on specifying where state is saved in the deployment descriptor.
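To see how these pieces fit together, here is a hedged, minimal skeleton of a component class written in the same style; the class and family names are made up for illustration and are not part of the tutorial's image map example:

import java.io.IOException;
import javax.faces.component.UIComponentBase;
import javax.faces.context.FacesContext;

public class SimpleComponent extends UIComponentBase {

    // Must match a component-family element in the application
    // configuration resource file
    public String getFamily() {
        return ("Simple");
    }

    public void encodeEnd(FacesContext context) throws IOException {
        // Delegate to a registered renderer if one exists
        if (getRendererType() != null) {
            super.encodeEnd(context);
            return;
        }
        // ...otherwise render directly using context.getResponseWriter()
    }
}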
https://flylib.com/books/en/1.370.1.112/1/
CC-MAIN-2021-04
en
refinedweb
Reply Cyber Security Challenge 2020 writeups

Last weekend we played the Reply Cyber Security Challenge 2020 and we solved the four challenges you find in this article. You can find files and exploits on our repository, at the url:

Wells-read

Category: Coding
Points: 200
Solved by: 0xThorn

Problem

Description: R-Boy finds the time machine, which Zer0 used to travel to another era. Nearby, lies a copy of HG Wells' The Time Machine. Could this book from the past help R-Boy travel in time? Unfortunately, R-Boy can't read or understand it, because the space-time continuum has changed some of the words. Help R-Boy to start his time travels!

Writeup

Analysis

We are provided with two files. At first glance, the first file (The time machine...) looks like a book, but looking closely at the text you can see that some words have wrong characters. The second file (words.txt) instead contains a long list of words, practically a dictionary. Looking closely at the wrong characters in the first file, you notice that some are symbols, such as curly brackets. From there comes the intuition: the flag was hidden in the text by substituting characters in the original words!

The script

TL;DR you can find the solution script here

- Imports the book, removing the newlines, double dashes and double spaces

with open('The Time Machine by H. G. Wells.txt','r') as file:
    text = file.read().replace("\n", " ").replace('--', ' ').replace('  ', ' ')

- Finds all the strings enclosed in two curly brackets (for the flag format)

matches = re.findall(r'\{[^\{\}]*\}', text)

- For each of the strings it is necessary to find the defects in the single words

for result in matches:
    differences = ''
    for word in result.split():
        differences += find_differences(word)
    print(differences)

- We create the find_differences function to find the defects in a word. First, punctuation characters are removed at the beginning and at the end

beginning_punctation = ['"', "'"]
end_punctation = [',', '.', '!', '?', '"', "'", ';', ":"]
while len(word) > 1 and word[-1] in end_punctation:
    word = word[:-1]
while len(word) > 1 and word[0] in beginning_punctation:
    word = word[1:]

- Then it checks that the word is not already perfectly contained in the dictionary. Checks are made with lowercase words to avoid false differences due to capitalization.

if word.lower() in [x.lower() for x in dictionary]:
    return ''

- The comparison is made with each word in the dictionary, only if the length of the words matches. It does a character-by-character check and saves the differences. If there is only one different character, the search ends.

for d in dictionary:
    if len(d) == len(word):
        if word[0].islower():
            d = first_lower(d)
        else:
            d = first_upper(d)
        different_characters = ''
        for _ in range(len(word)):
            if d[_] != word[_]:
                different_characters += word[_]
        if len(different_characters) == 1:
            return different_characters
return ''

- Run the script and get multiple strings. Only one respects the flag format:

{FLG:1_kn0w_3v3ryth1ng_4b0ut_t1m3_tr4v3ls}

LimboZone -?-> LimboZ0ne

Category: coding
Points: 200
Solved by: drw0if, 0xThorn

Problem

At first, R-Boy discovers the 'limbo zone' where, caught in a trap, he meets Virgilia, a guide and protector of the temporal dimension. Virgilia has probably been trapped by Zer0, but R-Boy can release her by decrypting the code.
Writeup

We are provided with a 7z archive; decompressing it we got 4 files:
- level_0.png
- lev3l_0.png
- ForceLevel.py
- level_1.7z

The .pngs are two seemingly equal pictures, the new archive is password protected and the python script contains:

# ForceLevel.py
def tryOpen(password):
    # TODO
    pass

def main():
    for x in range(0, 10000):
        for y in range(0, 10000):
            for r1 in range (0, 256):
                for g1 in range (0, 256):
                    for b1 in range (0, 256):
                        for r2 in range (0, 256):
                            for g2 in range (0, 256):
                                for b2 in range (0, 256):
                                    xy = str(x) + str(y)
                                    rgb1 = '{:0{}X}'.format(r1, 2) + '{:0{}X}'.format(g1, 2) + '{:0{}X}'.format(b1, 2)
                                    rgb2 = '{:0{}X}'.format(r2, 2) + '{:0{}X}'.format(g2, 2) + '{:0{}X}'.format(b2, 2)
                                    tryOpen(xy + rgb1 + rgb2)

if __name__ == "__main__":
    main()

So it is clear enough that in some weird way we must recover the password starting from the two pictures. We wrote a few python lines to loop over the two pictures and find the only position whose pixels are different in the two images; we then applied the same composition from the provided script and we got a string, probably the archive password. We first tried to open the archive but we got a wrong password... We applied the script again inverting the images so that first comes level_0.png and then lev3l_0.png. We got the right one, so we passed the level:

191186E2DBDB1B90CA

The next level provided us with two more images and another 7z archive; we applied the same logic and we got another password. We started automating our script so that it could achieve that task alone. At some point our script started failing; we opened the images and, without surprise, the images started to be transformed: mirrored around the Y axis, the X axis and so on. We implemented more algorithms and restarted the process from that point. In the end we implemented:

- mirror Y
- mirror X
- mirror X and Y
- rotate clockwise by 90°
- rotate counter clockwise by 90°
- rotate counter clockwise by 90° and mirror Y
- rotate counter clockwise by 90° and mirror X

The policy we applied to decide which transformation to apply is that we try the mirrors only if the two images have the same size, and we apply rotations otherwise. Then we attempt to find the different pixel and print the password only if there is exactly one different pixel; otherwise we discard that transformation and attempt another one.

So we chained a shell script to extract the archive, clean up the old ones (for memory reasons) and call the python algorithm to process the images. After 1024 iterations we got the level_1024.txt file with the flag.
Flag: {FLG:p1xel0ut0fBound3xcept1on_tr4p_1s_shutt1ng_d0wn}

brute.py

import sys
from PIL import Image

if len(sys.argv) < 3:
    print(f'{sys.argv[0]} file.png fil3.png')
    exit()

def makePassword(x, y, r1, g1, b1, r2, g2, b2):
    xy = str(x) + str(y)
    rgb1 = '{:0{}X}'.format(r1, 2) + '{:0{}X}'.format(g1, 2) + '{:0{}X}'.format(b1, 2)
    rgb2 = '{:0{}X}'.format(r2, 2) + '{:0{}X}'.format(g2, 2) + '{:0{}X}'.format(b2, 2)
    return xy + rgb1 + rgb2

img1 = Image.open(sys.argv[1])
img2 = Image.open(sys.argv[2])

width, heigth = img1.size

def findPassword(algorithm):
    passwords = []
    for x in range(width):
        for y in range(heigth):
            rgb1 = img1.getpixel((x, y))
            try:
                rgb2 = img2.getpixel(algorithm(x, y))
            except:
                return None
            if rgb1 != rgb2:
                password = makePassword(x, y, *rgb1, *rgb2)
                passwords.append(password)
                if len(passwords) > 1:
                    return None
    return passwords[0]

transformations = []

normal = lambda x,y: (x,y)

# Mirror around x axis
rotateX = lambda x, y: (x, heigth - y - 1)

# Mirror around y axis
rotateY = lambda x, y: (width - x - 1, y)

# Mirror around x and y axis
rotateXY = lambda x,y: (width - x - 1, heigth - y - 1)

# Rotate clockwise 90°
rotateC90 = lambda x,y: (heigth - y - 1, x)

# Rotate counterclockwise 90°
rotateCC90 = lambda x,y: (y, width - x - 1)

# Rotate counterclockwise 90° AND mirror around Y axis
rotateCC90MirrorY = lambda x,y: (heigth - y - 1, width - x - 1)

# Rotate counterclockwise 90° AND mirror around X axis
rotateCC90MirrorX = lambda x,y: (y, width - (width-x-1) - 1)

# Mirroring
if img1.size == img2.size:
    transformations += [
        normal,
        rotateX,
        rotateY,
        rotateXY,
    ]

# Rotating and more
if img1.size[0] == img2.size[1]:
    transformations += [
        rotateC90,
        rotateCC90,
        rotateCC90MirrorY,
        rotateCC90MirrorX
    ]

for l in transformations:
    pwd = findPassword(l)
    if pwd:
        print(pwd)
        exit(0)

exit(12)

automatize.sh

#!/bin/sh
i=0
while true; do
    password=$(python3 brute.py "level_$i.png" "lev3l_$i.png")
    if [ $? -eq 12 ]
    then
        echo "Implement new algorithm"
        exit 1
    fi
    i=$(($i+1))
    echo "$i -> $password"
    7z x "level_$i.7z" -p$password > /dev/null
    if [ $? -eq 0 ]
    then
        rm "level_$i.7z"
    else
        echo "Failure" >&2
        exit 1
    fi
done

Poeta Errante Chronicles

Category: Misc
Points: 100
Solved by: staccah_staccah

Problem

Description: The space-time coordinates bring R-boy to Florence in 1320 AD. Zer0 has just stolen the unpublished version of the "Divine Comedy" from its real author, the "Wandering Poet", giving it to his evil brother, Dante. Help the "Wandering Poet" recover his opera and let the whole world discover the truth.

Writeup

We were provided with an ip and a port. When we connected to it with netcat we discovered that the challenge was an old-style text adventure in which we were forced to answer some questions in a specific way to keep going on. The main challenge is composed of 3 sub-challenges.

#1 Challenge

It gave us a weird document composed of hex characters (first_challenge.txt). When we cleaned the text we tried various conversions; the right one was the unicode interpretation.
The result is a QR code composed of unicode characters; once translated, it showed a basic ID card whose "address" field was the answer to the riddle "Where are we going?":

Via Vittorio Emanuele II, 21

#2 Challenge

We had to guess a 4-digit code, helped by some hints in the form of a well-known riddle:

4170  1 correct digit and in wrong position
5028  2 correct digits and in right position
7526  1 correct digit and in right position
8350  2 correct digits and in wrong position

Insert the code: 3029

#3 Challenge

Another hexdump was printed (third_challenge.txt); this time it looked like network packets, and the hint in the text, "like sending a big, whole, message, but, instead, dived it in little p...ieces, and send them one at time...", confirmed it. So we imported the hexdump with text2pcap and opened the resulting pcap in Wireshark. There was a communication captured inside it, and reading the packets in the right order we got the flag:

{FLG:i7430prrn33743<}

Maze Graph

Category: web
Points: 100
Solved by: drw0if

Problem

Now R-Boy can start his chase. He lands in 1230 BC during the reign of Ramses II. In the Valley of the Temples, Zer0 has plundered Nefertiti's tomb to resell the precious treasures on the black market. By accident, the guards catch R-Boy near the tomb. To prove he's not a thief, he has to show his devotion to the Pharaoh by finding a secret note.

Writeup

We are provided with the url gamebox1.reply.it/a37881ac48f4f21d0fb67607d6066ef7/; the corresponding page said that a /graphql url was present on the domain, so we went there. What we found was a GraphiQL instance, so we were definitely facing a GraphQL challenge. GraphQL is an open-source data query and manipulation language for APIs; it makes it possible to retrieve and store data in a much easier way than writing and using a REST API. Thanks to GraphiQL we can easily discover the requests we can make, so we don't need to find them out ourselves.

In particular, the available queries are the fields listed in the RootQuery object below. There were also some user-defined objects with the following structure:

{
  "name": "RootQuery",
  "kind": "OBJECT",
  "fields": ["me", "allUsers", "user", "post", "allPublicPosts", "getAsset"]
}

{
  "name": "User",
  "kind": "OBJECT",
  "fields": ["id", "username", "firstName", "lastName", "posts"]
}

{
  "name": "Post",
  "kind": "OBJECT",
  "fields": ["id", "title", "content", "public", "author"]
}

This could be found using the following __schema query:

{
  __schema {
    types {
      name
      kind
      fields {
        name
      }
    }
  }
}

We first looped through all the public posts, but nothing important was there. We then queried all the users and for each of them we retrieved their posts:

{
  allUsers {
    id
    username
    firstName
    lastName
    posts {
      id
      content
    }
  }
}

and again nothing important was there. We then moved on to the post(id) query; we didn't know the id range but it was surely an incremental numbering. So we decided to enumerate all the posts starting from 1, and we printed only the private ones whose content didn't start with "uselesess". With this enumeration we found a post with id = 40 with the content:

{
  "data": {
    "post": {
      "id": "40",
      "title": "Personal notes",
      "content": "Remember to delete the ../mysecretmemofile asset.",
      "public": false
    }
  }
}

We knew we were near the end, so we used the getAsset API:

{ getAsset(name: "../mysecretmemofile") }

And we got the flag:

{
  "data": {
    "getAsset": "{FLG:st4rt0ffwith4b4ng!}\n"
  }
}
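The writeup doesn't include the enumeration script itself; a minimal reconstruction could look like the following (the endpoint is the one given above, everything else is our own scaffolding):

import requests

URL = 'http://gamebox1.reply.it/a37881ac48f4f21d0fb67607d6066ef7/graphql'

for post_id in range(1, 100):
    query = '{ post(id: %d) { id title content public } }' % post_id
    response = requests.post(URL, json={'query': query})
    post = response.json().get('data', {}).get('post')
    # keep only private posts with interesting content
    if post and not post['public'] and not post['content'].startswith('uselesess'):
        print(post)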
https://r00tstici.unisalento.it/reply-ctf-2020/
CC-MAIN-2021-04
en
refinedweb
In a few of my controllers I have an action that does not have a corresponding route because it is accessed only via a render ... and return in other controller actions. For example, I have an action

def no_such_page
  # displays a generic error screen
end

In my RSpec controller test, how do I 'get' that method and look at the response body? If I try:

get :no_such_page
response.status.should be(200)

it of course gives the error

No route matches {:controller=>"foo", :action=>"{:action=>:no_such_page}"}

Update

Looking back over your question, it doesn't make sense to me now since you say that you are only accessing this action via render ... and return, but render renders a view, not an action. Are you sure that you even need this action? I think a view spec is the place for this test.

Original answer

It doesn't make sense to test the response code of an action which will never be called via an HTTP request. Likewise get :no_such_page doesn't make sense as you can't "get" the action (there is no route to it), you can only call the method. In that sense, the best way to test it would be to treat it just like any other method on a class, in this case the class being your controller, e.g. PostsController. So you could do something like this:

describe PostsController do
  ... other actions ...
  describe "no_such_page" do
    it "displays a generic error screen" do
      p = PostsController.new
      p.should_receive(:some_method).with(...)
      p.no_such_page
    end
  end
end

But in fact, judging from what you've written, it sounds to me like your action has nothing in it, and you're just testing the HTML output generated by the corresponding view. If that's the case, then you really shouldn't be testing this in controller specs at all, just test it using a view spec, which is more appropriate for testing the content of the response body.
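For completeness (this wasn't in the original answer), a view spec for such a page might look something like the sketch below; the template path is an assumption:

# spec/views/foo/no_such_page.html.erb_spec.rb  (assumed location)
require 'spec_helper'

describe "foo/no_such_page" do
  it "displays a generic error screen" do
    render
    rendered.should match(/error/i)
  end
end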
https://www.codesd.com/item/in-rspec-how-to-test-a-controller-action-that-does-not-have-a-route.html
CC-MAIN-2021-04
en
refinedweb
Java - Tutorial

Java is a high-level, robust, object-oriented, class-based, concurrent, secure and general-purpose programming language. The syntax of Java is very similar to C and C++. It is used to create window-based applications, mobile applications, web applications and enterprise applications. Applications developed in Java are also described as WORA (Write Once, Run Anywhere). This implies that a compiled Java application is expected to run on any other Java-enabled system without any adjustment. As of now, Java is one of the most popular programming languages in use, particularly for client-server web applications.

About Tutorial

This tutorial is intended for students or professionals interested in studying basic and advanced concepts of Java. This tutorial covers all topics of Java, which include data types, operators, arrays, strings, control statements, functions, classes, object-oriented programming, constructors, inheritance, polymorphism, encapsulation, exception handling & file I/O. We believe in learning by examples, therefore each and every topic is explained with lots of examples that make you learn Java in a very easy way. The classic "Hello World!" example is shown below for illustration purposes.

// Hello World! Example //
public class MyClass {
    public static void main(String[] args) {
        System.out.println("Hello World!.");
    }
}

The output of the above code will be:

Hello World!.

Prerequisite

Before continuing with this tutorial, you should have a basic understanding of the C/C++ programming language or of basic Java. However, this tutorial is designed in such a way that anyone can start from scratch.
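To run the example yourself, save it as MyClass.java and use the standard JDK tools (assuming a JDK is installed and on your PATH):

javac MyClass.java   # compiles the source to MyClass.class bytecode
java MyClass         # runs the bytecode on the JVM and prints: Hello World!.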
https://www.alphacodingskills.com/java/java-tutorial.php
CC-MAIN-2021-04
en
refinedweb
Hello. I am trying to insert a new item into a binary search tree but I can't get my code to work right. Here I am supposed to ask the user for the new class object's data and then store it in the binary search tree. No matter what I try to insert, it outputs 'Can't have duplicate code', even if I know it is not there. This is my first time working with binary search trees and my class is accelerated, so I don't have the chance to really take my time and learn it on my own like I usually do. I'm just confused on where to go from here and how I can fix my code. This is what I have in main. This is the code I am having trouble with:

Code:
int main() {
    BinarySearchTree<Stock> tree;
    Stock company[23]; // a class to hold data about stocks
    string name, symbol;
    double price = 0;
    // this is a menu driven program, I only included the part of the code I am having trouble with
    else if (choice == INSERT) {
        cout << "Enter the new company's name: ";
        cin.ignore();
        getline(cin, name);
        cout << "Enter the new company's symbol: ";
        getline(cin, symbol);
        cout << "Enter the new company's price: ";
        cin >> price;
        for (int i = 0; i < 23;) {
            // search for the next empty spot in the class array of objects
            while (company[i].getPrice() != 0) {
                i++;
            }
            if (company[i].getPrice() == 0) {
                company[i].setName(name); // the setters are in the class Stock
                company[i].setSymbol(symbol);
                company[i].setPrice(price);
                tree.insert(company[i]); // outputs 'can't have duplicate code' even if it is not in the tree already
            }
        }
    }

Here is what the code looks like to insert an item into the binary search tree. I know there is nothing wrong with this code because my teacher helped the class write it. This code is just here for clarification:

Code:
template <typename T>
void BinarySearchTree<T>::insert(const T& item) {
    insert(root, item);
}

template <typename T>
void BinarySearchTree<T>::insert(Node<T> *&r, const T& item) {
    using namespace std;
    if (r == nullptr) {
        r = new Node<T>;
        r->info = item;
        r->left = nullptr;
        r->right = nullptr;
    }
    else if (item < r->info)
        insert(r->left, item);
    else if (item > r->info)
        insert(r->right, item);
    else
        cout << "Can't have duplicate code.\n";
}
https://cboard.cprogramming.com/cplusplus-programming/179742-how-insert-class-object-binary-search-tree-post1298903.html?s=58202fa41deb00b7b430979630e6d1df
CC-MAIN-2021-04
en
refinedweb
Writing raw data to a file using QFile

I have some data (uint8_t, uint32_t, float) that I want to write to a raw data file, without any extra information. What is the best way to do this? I've read that QDataStream prefixes every QByteArray with information like the size of the array. This is not what I want; I simply want a file that is only the bytes that I write, and will only be interpretable if someone knows the correct order and size of the data types being written. I have tried writing to the QFile directly, but that requires a const char * object. What is the correct way to convert all this data to a const char * without wasting any space?

A side note that is not really related: I am encrypting this file, and I believe the right way to do this is to extend QFile and reimplement the write() method to encrypt the data before writing it.

Do you require a specific endianness?

const char *QByteArray::constData() const

I have tried writing to the QFile directly, but that requires a const char * object.

But you will use the following overload:

qint64 QIODevice::write(const QByteArray &byteArray)

From your specifications, you will want to write directly to the file IMHO. You have no desire to share your data with anything else, so you will write & read it with your own code. Endianness might still be an issue if you want your own program to be cross-platform, e.g. write on one platform and use your own program to read it back on another platform, if you want that ability.

@JonB Thank you, I didn't realize there was an overload for byte array. Say I have the following data:

uint8_t status = getStatus();
uint8_t channel = getChannel();
float sourceVal1 = getSource1();
float sourceVal2 = getSource2();
uint32_t ticks = getTicks();

What is the correct way to transform my data into a byte array? The old-style cast and static_cast<char> don't seem to do the job (I get a warning "implicit conversion changes signedness", which means this would not work, correct?)

@Smeeth I'm afraid I'm just the wrong person to ask that! I'd go memcpy(char_buf, &status, sizeof(status)), or write a union, or use a (unsigned char *) cast. (I wouldn't get an "implicit conversion changes signedness"; you shouldn't be accessing the int as an int, only its address. Or more like static_cast<char *>(&status).) And make a QByteArray from the char buffer/pointer. (I'd really just go fwrite(&status, sizeof(status), 1, file_pointer), and the whole thing would be done in one line). None of which doubtless is at all allowed/encouraged now. So hopefully a C++ expert will offer some C++ or Qt friendly ways to get those bytes out of the variables.... :)

Thank you for your help. I believe I saw someone else suggest using memcpy in another thread somewhere so I will pursue that route. Thanks!

const char* charbytes{reinterpret_cast<const char*>(original_data)};
int size{static_cast<int>(sizeof(original_type))};
QByteArray bytes{charbytes, size};

Works the other way, too:

auto myStruct = reinterpret_cast<MyStruct*>(bytearray.data()); // "The pointer remains valid as long as the byte array isn't reallocated or destroyed."

The same structure on both ends is of course handy unless you need really variable data. In the latter case you have to create a complicated protocol and it would be better to use a higher-level protocol.
If you want to write raw data, use raw C functions :)

#include <cstdio>

FILE * file;
file = fopen("MyFile.dat", "wb");
fwrite(&status, 1, 1, file);
fwrite(&channel, 1, 1, file);
fwrite(&sourceVal1, sizeof(float), 1, file);
fwrite(&sourceVal2, sizeof(float), 1, file);
fwrite(&ticks, sizeof(ticks), 1, file);
fclose(file);

@mpergand OMG! But that's what I said. I assumed you'd be shot for that here! To be fair, I assumed the OP would want to use the QFile class to do his writing. Isn't one of the points of Qt to use QFile etc. instead of the C/C++ stdio stuff?

- aha_1980 Lifetime Qt Champion last edited by

What about QFile::writeData()?

- J.Hilk Moderators last edited by

To get back to the original question, I think we derailed a bit with QByteArray.

@Smeeth said in Writing raw data to a file using QFile:

I have some data (uint8_t, uint32_t, float) that I want to write to a raw data file, without any extra information.

Nevertheless, @aha_1980 is right: qint64 QIODevice::write(const char *data, qint64 maxSize) is the way to go. This should do just fine; it's untested, however.

QFile f (...);
....
uint8_t var1;
uint32_t var2;
float var3;

f.write(reinterpret_cast<const char *>(&var1), sizeof(uint8_t));
f.write(reinterpret_cast<const char *>(&var2), sizeof(uint32_t));
f.write(reinterpret_cast<const char *>(&var3), sizeof(float));

- mrjj Lifetime Qt Champion last edited by

template< class aType >
qint64 write( QFile& file, aType var )
{
    qint64 toWrite = sizeof(decltype (var));
    qint64 written = file.write(reinterpret_cast<const char*>(&var), toWrite);
    if (written != toWrite) {
        qDebug() << "write error";
    }
    qDebug() << "out: " << written;
    return written;
}

template< class aType >
qint64 read( QFile& file, aType &var )
{
    qint64 toRead = sizeof(decltype (var));
    qint64 read = file.read(reinterpret_cast<char*>(&var), toRead);
    if (toRead != read) {
        qDebug() << "read error";
    }
    qDebug() << "in: " << read;
    return read;
}

QFile file("e:/test.txt");
if (!file.open(QFile::WriteOnly)) {
    return;
}
uint8_t var1 = 10;
uint32_t var2 = 20000;
float var3 = 10.8;
write(file, var1);
write(file, var2);
write(file, var3);
file.close();

var1 = 0;
var2 = 0;
var3 = 0;
if (!file.open(QFile::ReadOnly)) {
    return;
}
read(file, var1);
read(file, var2);
read(file, var3);
qDebug() << " result = " << var1 << " " << var2 << " " << var3;

out: 1
out: 4
out: 4
in: 1
in: 4
in: 4
result = 10 20000 10.8

but the file is fragile to platform change etc.
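Worth adding for completeness, since it wasn't mentioned above: QDataStream itself can skip its serialization headers if you use its raw functions, writeRawData() and readRawData(), which copy the bytes verbatim with no length prefix or type tag. A minimal, untested sketch along the same lines as the examples above:

QFile file("MyFile.dat");
if (file.open(QFile::WriteOnly)) {
    QDataStream out(&file);
    uint8_t status = 1;
    float value = 10.8f;
    // writeRawData() writes the bytes as-is: no size field is prepended
    out.writeRawData(reinterpret_cast<const char *>(&status), sizeof(status));
    out.writeRawData(reinterpret_cast<const char *>(&value), sizeof(value));
    file.close();
}

Byte order is still your problem with raw writes, exactly as discussed above.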
https://forum.qt.io/topic/96160/writing-raw-data-to-a-file-using-qfile
CC-MAIN-2021-04
en
refinedweb
dynamic, self-adjusting array of short

#include <vtkShortArray.h>

vtkShortArray is an array of values of type short. It provides methods for insertion and retrieval of values and will automatically resize itself to hold new data.

The C++ standard does not define the exact size of the short type, so use of this type directly is discouraged. If an array of 16-bit integers is needed, prefer vtkTypeInt16Array to this class.

Definition at line 39 of file vtkShortArray.h.

Definition at line 42 of file vtkShortArray.h.

Definition at line 60 of file vtkShortArray.h.

Get the minimum data value in its native type.
Definition at line 68 of file vtkShortArray.h.

Get the maximum data value in its native type.
Definition at line 73 of file vtkShortArray.h.
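A minimal usage sketch of the insertion and retrieval methods described above; InsertNextValue() and GetValue() come from the generic data-array interface that vtkShortArray inherits, not from this page:

#include <vtkNew.h>
#include <vtkShortArray.h>

int main()
{
  vtkNew<vtkShortArray> values;       // reference-counted VTK object
  values->SetName("values");
  values->InsertNextValue(3);         // the array resizes itself as needed
  values->InsertNextValue(7);
  short first = values->GetValue(0);  // indexed retrieval
  (void)first;
  return 0;
}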
https://vtk.org/doc/nightly/html/classvtkShortArray.html
CC-MAIN-2021-04
en
refinedweb
Parse JSON from a local file in react-native:

JSON or JavaScript Object Notation is a widely used format for transferring data. For example, a server can return data in JSON format and any frontend application (Android, iOS or web application) can parse and use it. Similarly, an application can send data to a server in the same JSON format.

Sometimes, we may need to test our application with local JSON files. If the server is not ready, or if you want to test the app without interacting with the main server, you can use local JSON files. In this post, I will show you how to import a local JSON file and how to parse the values in a React Native application.

Create one react native app:

You can create one react-native application by using the below command:

npx react-native init MyProject

Create one JSON file:

By default, this project comes with a default template. We will place the JSON file in the root directory. Open the root directory; you will find one App.js file. Create one new JSON file example.json in that folder. Add the below content to this JSON file:

{
  "name" : "Alex",
  "age" : 20,
  "subject" : [
    {
      "name": "SubA",
      "grade": "B"
    },
    {
      "name" : "SubB",
      "grade" : "C"
    }
  ]
}

It is a JSON object with three key-value pairs. The first key name is for a string value, the second key age is for a number value and the third key subject is for an array of JSON objects. We will parse the content of this array and show it in the application UI.

JSON parsing:

We can import one JSON file in any JavaScript file like import data from './file.json'. Open your App.js file and change the content as below:

import React, { Component } from 'react';
import { Text, View } from 'react-native';
import exampleJson from './example.json'

export default class App extends Component {
  render() {
    return (
      <View>
        <Text>{exampleJson.name}</Text>
        <Text>{exampleJson.age}</Text>
        {exampleJson.subject.map(subject => {
          return (
            <View key={subject.name}>
              <Text>{subject.name}</Text>
              <Text>{subject.grade}</Text>
            </View>
          )
        })}
      </View>
    );
  }
};

Explanation:

Here,
- We are importing the .json file as exampleJson.
- We can access any value of that JSON file using its key name.
- The third key, subject, is a JSON array. We are iterating through the items of this array using map().

Output:

Run it on an emulator and you will get one result like below:
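Importing like this works for files bundled with the app. If the JSON instead arrives at runtime as a string (for example, a server response body), you would parse it with JSON.parse() instead; a small sketch:

const raw = '{"name":"Alex","age":20}';  // e.g. a response body
const data = JSON.parse(raw);            // string -> JavaScript object
console.log(data.name);                  // prints "Alex"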
https://www.codevscolor.com/react-native-parse-json
CC-MAIN-2021-04
en
refinedweb
Creating a Python GUI using Tkinter

Tkinter is the native Python GUI framework that comes bundled with the standard Python distribution. There are numerous other Python GUI frameworks. However, Tkinter is the only one that comes bundled by default.

Tkinter has some advantages over other Python GUI frameworks. It is stable and offers cross-platform support. This enables developers to quickly develop Python applications using Tkinter that will work on Windows, macOS, and Linux. Another benefit is that the visual elements created by Tkinter are rendered using the operating system's native elements. This ensures that the application is rendered as though it belongs to the platform where it is running.

Tkinter is not without its flaws. Python GUIs built using Tkinter appear outdated in comparison to other more modern GUIs. If you're looking to build attractive applications with a modern look, then Tkinter may not be the best option for you.

On the other hand, Tkinter is lightweight and simple to use. It requires no installation and is less of a headache to run as compared to other GUI frameworks. These properties make Tkinter a solid option for when a robust, cross-platform supporting application is required without worrying about modern aesthetics. Because of its ease of use and low aesthetic appeal, Tkinter is often used as a learning tool.

In this tutorial, you will learn how to build a Python GUI using the Tkinter library.

Table of Contents

You can skip to a specific section of this Python GUI tutorial using the table of contents below:
- Python GUI - A Basic Tkinter Window
- Python GUI - Tkinter Widgets
- Python GUI - Label in Tkinter
- Python GUI - Entry in Tkinter
- Python GUI - Text in Tkinter
- Python GUI - Create Buttons in Tkinter
- Python GUI - Working With Events
- Final Thoughts

By the end of this tutorial, you will have mastered Tkinter and will be able to efficiently use and position its widgets. You can test your skills by trying to build your own calculator using Tkinter. Let's get down to it and start with creating an empty window.

Python GUI - A Basic Tkinter Window

Every Tkinter application starts with a window. More broadly, every graphical user interface starts with a blank window. Windows are containers that contain all of the GUI widgets. These widgets are also known as GUI elements and include buttons, text boxes, and labels.

Creating a window is simple. Just create a Python file and copy the following code into it. The code is explained ahead and creates an empty Tkinter window.

import tkinter as tk

window = tk.Tk()
window.mainloop()

The first line of the code imports the tkinter module that comes integrated with the default version of the Python installation. It is convention to import Tkinter under the alias tk. In the second line, we create an instance of Tk and assign it to the variable window. If you don't include window.mainloop() at the end of the Python script, then nothing will appear. The mainloop() method starts the Tkinter event loop, which tells the application to listen for events like button clicks, key presses and closing of windows. Run the code and you'll get an output that looks like this:

Tkinter windows are styled differently on different operating systems. The output above is generated when the Tkinter window is created on Windows 10.

It is important to note that you should not name the Python file tkinter.py, as this will clash with the Tkinter module that you are trying to import. You can read more about this issue here.
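Before moving on to widgets, two small window methods are worth knowing, since you will use them in almost every Tkinter program; a minimal sketch (the title text and size shown are arbitrary):

import tkinter as tk

window = tk.Tk()
window.title("My First GUI")   # text shown in the window's title bar
window.geometry("400x300")     # initial size as "widthxheight" in pixels
window.mainloop()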
Python GUI - Tkinter Widgets

Creating an empty window is not very useful. You need widgets to add some purpose to the window. Some of the main widgets supported by Tkinter are:
- Entry: An input type that accepts a single line of text
- Text: An input type that accepts multiple lines of text
- Button: A button input that has a label and a click event
- Label: Used to display text in the window

In the upcoming sections, the functionality of each widget will be highlighted one by one. Note that these are just some of the main widgets of Tkinter. There are many more widgets that you can check out here, and some more advanced widgets here. Moving on, let's see how a label works in Tkinter.

Python GUI - Label in Tkinter

Label is one of the important widgets of Tkinter. It is used for displaying static text in your Tkinter application. The label text is not editable and is present for display purposes only. Adding a label is pretty simple. You can see an example of how to create a Tkinter label below:

import tkinter as tk

window = tk.Tk()

lbl_label = tk.Label(text="Hello World!")
lbl_label.pack()

window.mainloop()

Running this code will provide the following output:

For reasons that I'll explain in a moment, this output is far from ideal. Let's explain this code first. The variable lbl_label holds a Tkinter label, which is attached to the window by calling the pack() method. You can also change the background and text color. The height and the width of the label can be adjusted as well. To change the colors and configure the height and width, simply update the code as follows:

lbl_label = tk.Label(
    text="Hello World!",
    background="green",
    foreground="red",
    width="10",
    height="10"
)

Running the code will yield the following output:

Fig 3: Configuring Tkinter Label Widget

You may notice that the label box is not square despite the fact that the width and height have been set equal. This is because the length is measured in text units. The horizontal text unit is the width of the character 0 (the number zero) in the default system font, and similarly, the vertical text unit is determined by the height of the character 0.

Next, let's explore how to accept user input in a Tkinter application.

Python GUI - Entry Widgets in Tkinter

The entry widget allows you to accept user input from your Tkinter application. The user input can be a name, an email address or any other information you'd like. You can create and configure an entry widget just like a label widget, as shown in the following code:

import tkinter as tk

window = tk.Tk()

lbl_label = tk.Label(
    text="Hello World!",
    background="green",
    foreground="red",
    width="20",
    height="2"
)

ent_entry = tk.Entry(
    bg="black",
    fg="white",
    width="20",
)

lbl_label.pack()
ent_entry.pack()

window.mainloop()

Running the code will yield the following output:

You can read the input inserted by the user by using the get() method. A practical example of this method will be shown in the button section later in this Python GUI tutorial.

Python GUI - Text in Tkinter

The Tkinter entry widget is useful if you're looking for a single line of input. If a response requires multiple lines, then you can use the text widget of Tkinter. It supports multiple lines, where each line is separated by a newline character '\n'.
You can create a text widget by adding the following code block to the code shown in the entry widget section:

txt_text = tk.Text()
txt_text.pack()

Running the code after adding the code above will yield the following output:

Python GUI - Create Buttons in Tkinter

If you want your Tkinter application to serve any meaningful purpose, you will need to add buttons that perform some operation when they are clicked. Adding a button is pretty straightforward and similar to how we added the other widgets. You can add a simple button by adding the following code block:

btn_main = tk.Button(
    master=window,
    text="Main Button"
)
btn_main.pack()

Running the code will yield the following output:

Now that there's a button, we can do some serious damage! The button generates an event that can be used for changing elements or performing other functionality.

Python GUI - Working With Events

For the purpose of this tutorial, functions will be kept simple. So, whenever the Main Button is clicked, whatever the user inputs in the entry widget will be copied into the text and label widgets. The code is edited as follows to achieve this functionality:

import tkinter as tk

def copyText(text):
    if(str(text)):
        textVar.set(text)
        txt_text.insert(tk.END, text)

window = tk.Tk()

textVar = tk.StringVar()
textVar.set("Hello World!")

lbl_label = tk.Label(
    textvariable=textVar,
    background="green",
    foreground="red",
    width="30",
    height="2"
)

ent_entry = tk.Entry(
    bg="black",
    fg="white",
    width="30",
)

txt_text = tk.Text()

btn_main = tk.Button(
    master=window,
    text="Main Button",
    command = lambda: copyText(ent_entry.get())
)

lbl_label.pack()
ent_entry.pack()
txt_text.pack()
btn_main.pack()

window.mainloop()

In this code, the copyText() method has been introduced. This method copies the text of the entry widget to the label and the text widgets. To change the text of the label, we introduced a StringVar; instead of setting the label's text directly, we bind its textvariable option to the StringVar. Using the command option, we set the button to call the copyText method whenever it is clicked. The entry widget's text is passed to the method.

In the copyText method, the first step is to check whether the entry widget contains an empty string. Python makes it simple to do this, as an empty string is considered a boolean false value. After checking for the empty string, the value of the entry widget is copied to the StringVar and the text widget. The text is inserted at the end of the text widget by setting its position as tk.END. It can be set to a particular index as well by replacing it with "1.0", where '1' is the line number and '0' is the character offset.

Executing the code will yield the following output:

Final Thoughts

Working with Python is fun and simple. It allows you to build cool applications pretty easily. Learning Tkinter allows you to build your first Python GUI. It's simple, supports cross-platform compatibility and you can build many different applications with it.

This tutorial is just a basic guide to Tkinter. There is much more available for you to learn about Tkinter. Learning about geometry managers should be your next step to improve your Tkinter skills. After working through this tutorial, you should have a basic understanding of Tkinter and how to use buttons to call functions, paving the way for further exploration.
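As a first taste of those geometry managers, here is a minimal sketch using grid() instead of pack(); the row/column layout shown is arbitrary:

import tkinter as tk

window = tk.Tk()
# grid() arranges widgets in a table of rows and columns
tk.Label(window, text="Name:").grid(row=0, column=0)
tk.Entry(window).grid(row=0, column=1)
tk.Button(window, text="OK").grid(row=1, column=0, columnspan=2)
window.mainloop()

Just avoid mixing pack() and grid() inside the same container; Tkinter raises an error if you do.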
https://nickmccullum.com/python-gui-tkinter/
CC-MAIN-2021-04
en
refinedweb
Feature #7051

Extend caller_locations API to include klass and bindings. Allow caller_locations as a method hanging off Thread.

Description

The new caller_locations API allows one to get label, base_label, path, lineno, absolute_path. I feel this API should be extended somewhat and cleaned up a bit.

- I think there should be complete parity with the caller and backtrace APIs.

As it stands, caller_locations is a fine replacement for caller; however, the same code responsible for caller is also responsible for Thread#backtrace. As it stands, there is no equivalent Thread#backtrace_locations API that allows one to grab OO backtraces for a thread. For sampling profilers, a common technique is to spin a thread that grabs stack traces from a profiled thread. I played around with some of this here:

- I think there should be a way to get the class the method belongs to, where possible.

At the moment you need to use something like ruby2ruby to determine the class of a frame in a backtrace. Getting the class is actually very important for auto-instrumentation techniques. A general approach is to grab stack traces, determine locations that seem to be called a lot, and then instrument them. Trouble is, without a class it is very hard to pick a point to instrument. I got this working with my stacktrace gem, so I think it is feasible.

- Ability to grab bindings up the call chain

For some advanced diagnostic techniques it is very handy to grab bindings up the chain; that way you can print out local vars in a calling method and so on. It would be handy if this API exposed bindings.

- Naming and longer-term deprecation of caller

The name caller_locations to me feels a bit unnatural; as an API it is superior in all ways to the current caller, yet is far harder to type or remember. I am not really sure what it should be named, but caller feels a bit wrong. Also, it feels very tied to MRI returning RubyVM::Backtrace::Location; Location seems to me in the wrong namespace. Are JRuby and Rubinius going to be expected to add this namespace? Is this going to be spec? Has any thought been given to longer-term deprecation of 'caller'?

- What about exceptions?

Exception has no equivalent of caller_locations. For exceptions it makes much more sense to cut down on allocations and store a thinner stack trace object, in particular for the massively deep stacks frameworks like Rails have, then materialize the strings if needed on iteration through Exception#backtrace.
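For readers who haven't used the API under discussion, a minimal sketch of what caller_locations already returns today (each frame exposes the fields named in the description):

def inner
  caller_locations.each do |loc|
    # each Location object responds to label, base_label, path,
    # lineno and absolute_path
    puts "#{loc.path}:#{loc.lineno} in #{loc.label}"
  end
end

def outer
  inner
end

outer   # prints one line per calling frame, e.g. "demo.rb:11 in outer"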
https://bugs.ruby-lang.org/issues/7051?tab=properties
CC-MAIN-2020-24
en
refinedweb
DECISION

CASANOVA, J.:

This is a Petition for Review, filed on April 11, 2014 by petitioner Zuellig Pharma Corporation, praying for the refund of the amount of P477,269,935.23, allegedly representing its excess and unutilized creditable withholding taxes (CWT) for the calendar year (CY) ended December 31, 2011.

On April 16, 2012, petitioner filed its Annual Income Tax Return (ITR) [5] for CY 2011 with the BIR, through the Electronic Filing and Payment System (eFPS). Petitioner then manually filed on April 30, 2012, its Annual ITR [6] for CY 2011 with the BIR, Large Taxpayers Assistance Division I, together with a copy of its audited financial statements [7] for the year ended December 31, 2011.

On June 13, 2013, petitioner filed with the BIR through eFPS its Amended Annual ITR [8] for CY 2011 using the new BIR Form No. 1702. In both its manually and electronically-filed Annual ITRs [9] for CY 2011, petitioner indicated its option to claim for refund of its excess and unutilized CWT for CY 2011.

On April 13, 2012, petitioner filed with respondent a letter [10] signifying its intent to claim for refund of its excess and unutilized CWT for CY 2011.

On April 1, 2014, petitioner filed with the BIR, Large Taxpayers Services, an Application for Tax Credits/Refunds (BIR Form No. 1914), [12] praying for the refund of its excess and unutilized CWT for CY 2011 in the amount of P477,269,935.23.

Within the extended time granted by the Court, [13] respondent filed his Answer [14] on June 6, 2014, interposing the following Special and Affirmative Defenses:

affirmative defenses that:

10. In the foregoing case, it should be noted that nowhere in the petition did petitioner aver that it complied with the required submission of supporting documents to justify its claim for refund.

9) Xerox copy of used Tax Credit Certificate with annotation of issued TDM at the back, if applicable.
a) current year/period
b) previous year/period
A. xxx

2. All persons claiming for refund or applying their creditable tax withheld at source against the tax due with more than ten (10) withholding agents-payor of income payment per return period are strictly required to submit SAWT electronically in 3.5-inch floppy diskette following the format to be prescribed by the BIR;

14. In its claim for refund for the calendar year 2011, petitioner clearly failed to submit the pertinent documents required pursuant to RMO No. 53-98 and R.R. 2-2006.

a.) That the claim for refund was filed within the two-year prescriptive period as provided under Section 204(c) in relation to Section 229 of the NIRC of 1997.

c.) That the income upon which the taxes were withheld was included in the return of the recipient.

16. Petitioner must also prove that it did not carry over the excess creditable withholding taxes against the Quarterly and Annual Income Tax Returns in the succeeding taxable years as provided under Section 76 of the 1997 Tax Code.

left blank.
Petitioner did not shade the option to be refunded in Line 37 of the said return. Hence, it never signified that it chose to be refunded.

Thereafter, the parties filed their Joint Stipulation of Facts and Issues [18] on September 29, 2014, which was approved and adopted by the Court in the Pre-Trial Order dated October 8, 2014. [19]

In support of its claim for refund, petitioner presented the following witnesses: Mr. Joel R. Ducut, [21] petitioner's Assistant Corporate Controller; and Ms. Katherine O. Constantino, [22] the Independent Certified Public Accountant (ICPA). Petitioner filed its Formal Offer of Evidence [23] on May 13, 2015, which was resolved by the Court in the Resolution dated September 15, 2015. [24]

Petitioner, on the other hand, filed its Memorandum [31] on August 30, 2016. The case was then considered submitted for decision in the Resolution [32] dated September 6, 2016.

The parties stipulated that the main issue [33] to be resolved in this case is:

as part of the revenues declared in petitioner's Annual ITR.

As to the first requisite, Sections 204(C) and 229 of the NIRC of 1997, as amended, provide as follows:

The present claim pertains to taxable CY 2011, for which petitioner filed its original Annual Income Tax Return (AITR) through the BIR's Electronic Filing and Payment System (EFPS) on April 16, 2012. [38] Counting from this date, petitioner had until April 16, 2014 within which to file a claim for refund of its 2011 excess CWT both in the administrative and judicial levels. Petitioner's letter-claim for refund [39] filed with the respondent on April 13, 2012, supplemental letter-claim for refund [40] and Application for Tax Credits/Refunds [41] filed with the BIR's Large Taxpayers Service on February 28, 2014 and April 1, 2014, respectively, as well as its subsequent appeal via a Petition for Review filed before this Court on April 11, 2014, fell within the two-year prescriptive period under Sections 204(C) and 229 of the NIRC of 1997, as amended.

With regard to the second and third requisites, Section 2.58.3 (B) of Revenue Regulations (RR) No. 02-98, as amended, states:

Findings (Exh. "P-19") | Annex Reference | Income Payment | Tax Withheld
CWT duly supported by original Certificate of Creditable Tax Withheld at Source (BIR Form 2307) | Annex 1-a | P43,249,700,267.27 | P441,220,333.92
CWT duly supported by original Certificate of Creditable Tax Withheld at Source (BIR Form 2307) with incorrect Petitioner's TIN indicated therein | Annex 1-b | 847,817,242.41 | 9,822,185.21
CWT duly supported by original Certificate of Creditable Tax Withheld at Source (BIR Form 2307) with unclear date | Annex 1-c | 2,752,283.81 | 24,449.26
CWT supported by photocopies of certificates with correct name, TIN and date | Annex 1-d | 743,610,625.61 | 7,545,434.24
CWT certificates not yet provided | | | 18,657,143.95
Total | | P44,843,880,419.10 | P477,269,546.58

The Court noted that the amount of P477,269,546.58 is lower by P388.65 when compared with petitioner's claimed CWT of P477,269,935.23.
Thus, the said discrepancy of P388.65 shall be disallowed from petitioner's claim for being unsupported.

Exhibit | Customer Name | Income Payment

CWT supported by BIR Forms No. 2307 but not in the name of the petitioner:
P-254 | Argent Business Consultants and Stores Specialist, Inc. | P1,400.00
P-365 | Argent Business Consultants and Stores Specialist, Inc. | 26,977.60

CWT supported by BIR Forms No. 2307 but without signature of the payor/payor's authorized representative:
P-3643 | Majesty Pharmacy, Inc. | 3,422,350.23
P-358746 | Drugman Drug House | 85,773.75
P-358771 | Drugman Drug House | 216,321.92
P-362567 | New Sacred Heart Pharmacy-Dumaguete Branch | 113,811.56
P-363929 | Rufino Brodeth & Co., Inc./Luz Pharmacy | 713,954.80

CWT supported by BIR Forms No. 2307 but with different petitioner's TIN indicated therein:
P-358233 | Dagupan Doctors Villaflor Memorial Hospital, Inc. | 1,009,389.69
P-358234 | Dagupan Doctors Villaflor Memorial Hospital, Inc. | 1,646,455.02
P-358235 | Dagupan Doctors Villaflor Memorial Hospital, Inc. | 3,415,526.61
P-358237 | Dagupan Doctors Villaflor Memorial Hospital, Inc. | 2,411,579.22
P-358244 | Dagupan Doctors Villaflor Memorial Hospital, Inc. | 1,268,977.16
P-361298 | LTS Supermarkets, Inc. | 23,564,047.00

CWT supported by BIR Forms No. 2307 but dated outside the period of claim:
P-359666 | First Lane Supertraders Co., Inc. | 767,290.00
P-359667 | First Lane Supertraders Co., Inc. | 316,718.00
Philippine Taxation Encyclopedia Second Release 2018 13 Particulars Amount of sales Per sales register P57,915,958,134.46 Per general ledger of all sales 58,031,790,982.54 Per audited financial statements 57,985,824,408.00 Per amended annual income tax return 58,012,576,794.00 Differences a. Sales register and general ledger (P115,832,848.08) b. General ledger and audited financial statements 45,966,574.54 c. Audited financial statements and amended annual income tax return (26,752,386.00) d. Sales register and annual income tax return 96,618,659.54 General Ledger Account Code Account Name Balance 40101 Accrued Tax T/P (P217,250,159.30) 41202 Financial Discounts (83,778,665.26) 42001 Services T/P 399,261,189.10 42002 Services I/C 182,780.00 P98,415,144.54 Add adjustment for timing differences: Undelivered sales for CY 2010 reported as sales in GL 2011 (Exhibit 34, Folder 80, Box 1) Trade Returns for 2010 recorded in 2011 25,128,948.63 Exhibit 37, Folder 80, Box 1) Trade Returns of 2011 not yet recorded 11,162,033.18 in GL (Exhibit 35, Folder 80, Box 1) (18,471,733.37) Undelivered sales of 2011 recorded in 2012 (Exhibit 36, Folder 80, Box 1) (401,544.82) 17,417,703.62 115,832,848.16 Copyright 2018 CD Technologies Asia, Inc. and Accesslaw, Inc. Philippine Taxation Encyclopedia Second Release 2018 14 b. The difference of P45,966,574.54 between the general ledger and audited financial statements pertains to clinical trial management service income, recorded in the general ledger under Account code 42001 Service T/P account, which was treated as an outright reduction to non-operating expenses under Account Code 48605, Other N-OP inc/exp with GL balance amounting to P47,639,031.33 (P45,966,574.54 clinical trial management and P1,672,456.79 pertaining to others). Entry details of P45,966,574.54 were provided by the Petitioner (to be presented as Exhibit P-26, Folder 6, Box 1). Copyright 2018 CD Technologies Asia, Inc. and Accesslaw, Inc. Philippine Taxation Encyclopedia Second Release 2018 15 "Reconciliation between general ledger and amended annual income tax return as follows: SALES AMOUNTAccount Description per GL per ITR Difference 40001 VAT SALES T/P (64,838,920,385.00) (64,838,920,385.48) 40101 ACCRUED TAX T/P 217,250,159.30 190,497,773.43 26,752,385.87 40201 VAT ON SALES T/P 6,905,544,547.00 6,905,544,547.46 FINANCIAL 41202 DISCOUNTS 83,778,665.2 83,778,665.26 42001 SERVICES T/P (399,261,189.10) (353,294,614.94) (45,966,574.16) 42002 SERVICES I/C (182,780.00) (182,780.00)TOTAL (58,031,790,982.54) (58,012,576,794.27) (19,214,188.27) Accordingly, the difference of total sales per general ledger with the sales per amended annual income tax return of 19,214,188.27 was due to the following: Certainly, the total taxable sales of goods and services per petitioner's sales register and GL were the same amounts reported by petitioner in its 2011 AITR. SALE OF GOODS ICPA's Per BIR Form 2307 Computation of Sales Amount per Copyright 2018 CD Technologies Asia, Inc. and Accesslaw, Inc. Philippine Taxation Encyclopedia Second Release 2018 16 Income Payment Sales Register Difference Customer Name (A) Tax Withheld (B)INCOME PAYMENT WITH NO CORRESPONDING RECORDS IN THE SALES REGISTERBarili District Hospital 75,661.00 756.61 -Benguet ProvincialGovernment 18,670.00 166.70 -Cebu Doctors College Inc. 
152,678.57 1,526.79 -Cypress ManufacturingLimited 1,045,200.00 10,452.00 -Daanbantayan DistrictHospital 1,618.00 16.18 -DOH-Regional Health Office— CAR 257,142.86 2,571.43 -Dynasty Management &Development Corp 75,382,233.93 735,017.49 -Emilus SupermarketSystems Inc. 49,894.69 498.95 -Everlink Distribution Group,Inc. 30,775.44 30,775.44 -Far East Noble House 38,638,763.36 383,286.13 - 38,638,763.36FMC Renalcare Corp. 431,125.00 4,311.25 -Gaisano Dadaingas Inc. 66,893.25 668.94 -Handyman Express Mart,Inc. 73,275.00 732.75 -Lion Commercial Corp. 294,308.02 2,943.08 -Provincial Government ofCavite 780,535.89 7,369.44 -Riviera Mercantile Sys., Inc. 4,044,159.00 40,441.59 -Sara Lee Philippines Inc. 89,285.71 1,785.71 -Shogun Management &Development Corp. 40,030,611.29 399,260.26 - 40,030,611.29Shopmore Commercial Corp. 245,319.64 2,453.18 -Waltermart Handyman, Inc. 124,625.00 1,246.25 -Subtotal 161,832,775.65 1,626,280.17 - 161,832,775.65 INCOME PAYMENT PER SALES REGISTER IS LESS THAN THE INCOME PAYMENT REFLECTED IN THE CWTCERTIFICATESAB Pharma Inc. 20,381,040.24 203,810.41 20,233,930.44Ace Hardware Phil's., Inc. 11,179,300.00 111,793.00 11,025,551.42Ace Med, Inc. 2,954,419.83 29,544.20 2,580,027.64Act General Merchandise &Drug, Inc. 144,780,830.81 1,447,808.32 144,670,718.92Adela Serra Ty MemorialMedical Center 691,370.57* 6,913.70 522,706.14Allah Valley MedicalSpecialists Center, Inc. 5,854,536.00 58,545.36 5,576,733.70Antipolo City Hospital, Inc. 1,473,385.94 14,733.87 1,356,312.03Asia Renal Care (Phils.) 1,468,286.22 14,682.86 516,492.10Asociacion Benevola DeCebu, Inc. (Chong HuaHospital) 214,552,762.01 2,145,527.65 206,176,602.34Benguet General Hospital 4,513,965.49* 45,148.57 4,252,494.34 Copyright 2018 CD Technologies Asia, Inc. and Accesslaw, Inc. Philippine Taxation Encyclopedia Second Release 2018 17Botica Real 34,709,394.08 336,657.61 10,312,376.74 24,397,017.34Bureau of Corrections 2,588,471.00 25,884.71 2,483,890.32Cagayan De Oro MedicalCenter Inc. 17,852,832.00 178,528.32 16,969,829.26Cagayan Valley MedicalCenter 10,912,837.60 232,944.64 9,053,476.24Cebu Far Eastern Drug Inc. 2,163,237.24 19,341.77 1,884,554.97Center For HealthDevelopment IV-B 229,906.03 2,052.73 86,284.32Christ The King MedicalServices, Inc. 11,544,341.00 115,443.41 356,523.16 11,187,817.84City Government of Pasig 45,690,051.00 456,900.51 18,256,901.26 27,433,149.74City Government of Bago 51,339.29 513.39 47,767.86City Government of Binan 1,571,429.00* 15,714.29 136,562.50City Government of Makati 29,713,385.71 297,133.86 21,333,043.66City Government ofMuntinlupa 2,008,928.57 20,089.29 1,169,709.18City SupermarketIncorporated 51,544,719.99 515,447.18 50,543,219.22Colinas Verdes HospitalManagers Corp. 123,829,568.00 1,238,295.68 119,035,961.00Commission on Audit 2,428,261.00 24,282.61 137,573.71Commission on Elections 160,039.20 1,453.03 155,012.68Complete Solution PharmacyAnd Gen. Mer. 91,798,776.00 917,987.76 76,510,309.92 15,288,466.08CVC Supermarket, Inc. 738,512.00 7,385.12 735,418.37Davao Adventist Hospital,Inc. 3,205,653.65 30,932.35 2,618,366.10Davao Central WarehouseClub, Inc. 10,101,256.00 101,012.65 1,671,797.18Davao Regional Hospital 30,485,949.18 304,859.50 30,352,181.41De Los Santos Med. Ctr.Diagnostic Corp. 15,232,920.78 152,329.21 14,163,576.80Dranix Distributor Inc. Iloilo 10,310,756.00 108,196.38 (1,206,667.24) 11,517,423.24E. Berlin Pharmacy 4,851,894.00 48,518.94 4,077,860.92Erlinda G. Germar/FarmaciaFatima & Fatima SodaFounta 15,451,062.84 154,510.62 10,535,625.73Ever Commonwealth Center,Inc. 
44,764,014.47 447,640.07 36,906,409.12Evercare Pharmacy 296,745.57 3,631.49 206,136.46Everplus Superstore, Inc. 107,054,277.70 1,070,552.93 76,910,417.45 30,143,860.25Extract Sales Incorporated 102,659,130.47 1,026,591.32 78,820,886.01 23,838,244.46FEB Cuisine Corp. 3,331,046.11 30,110.01 3,010,996.18FEU-Dr. Nicanor ReyesMedical Foundation 8,130,608.00 81,306.08 8,031,427.66Gaisano Bros. Mdsg., Inc. 15,084,981.00 150,849.81 15,078,129.00Grand Union SupermarketInc., Operator 28,603,984.71 286,039.85 20,342,184.64Inter-Medical Unified System,Inc. 13,359,095.27 133,590.96 13,355,251.11ISS Facility Services Phil., Copyright 2018 CD Technologies Asia, Inc. and Accesslaw, Inc. Philippine Taxation Encyclopedia Second Release 2018 18Inc. 1,093,587.21 10,935.87 150,296.25Jemstar Trading Corporation 14,283,673.00 142,836.22 13,499,641.94Juliano, Susan G. 84,000.00 1,680.00 6,447.26K2 Drug & Medical Supplies 10,381,614.12 103,816.14 5,323,076.74Larrazabal, Susanna Ortega 108,432,225.46 1,084,545.61 106,784,231.16LTS Affiliates, Inc. 61,824,609.38 618,246.09 9,648,673.07 52,175,936.31Lucena United DoctorsIncorporated 12,455,928.00 124,559.28 12,333,856.68Manson Drug Corp. 347,337,544.82 3,473,375.69 246,640,598.58 100,696,946.24Marionnaud Philippines, Inc. 26,710,298.00 267,102.98 26,672,619.64Mary Johnston Hospital, Inc. 1,781,627.77 17,816.27 1,461,365.65Medical Mission GroupHospital & Health ServicesCooper 158,017.92 1,580.17 118,004.15Metropolitan PharmaceuticalProducts, Inc. 17,856.47 257.12 15,943.28Mindanao Sanitarium AdHospital 25,287,731.24 136,521.59 15,908,113.44Mount Carmel DiocesanGeneral Hospital 32,028,938.00 320,289.38 31,301,758.99Negros Union Drug Co., Inc. 30,342,354.11 303,423.53 29,755,646.84Norvic Drugs Corporation 54,556,608.66 545,566.09 54,500,131.63Nueva Ecija ProhealthInc./Heart of Jesus Hospital 300,785.66 3,007.87 (29,619.72)Oslob District Hospital 18,000.00 160.71 16,071.43Perpetual Succor Hospital &Maternity, Inc. 58,770,014.28 587,700.15 54,026,909.59Philex Mining Corporation 7,601,762.08 76,017.61 6,162,002.66Philippine Long DistanceTelephone Company 139,984,747.45 1,399,847.48 132,244,275.27Philippine Heart Center 294,471,060.57 3,776,666.97 137,624,178.74 156,846,881.83Province of Pampanga 2,678,348.21 26,783.48 2,187,098.21Provincial Government ofNegros Occidental 3,372,840.00 33,728.40 2,530,189.39Provincial Government ofNueva Vizcaya-NuevaVizcaya 314,065.00 2,804.15 283,640.18Provincial Government of Or.Neg. 1,868,680.08 17,421.92 1,486,520.11Puregold JuniorSupermarket, Inc. 109,971,961.92 1,099,719.66 60,160,645.38 49,811,316.54Research Institute ForTropical Medicine 18,780,384.92 187,737.62 16,784,499.05Right Choice Supermarket 330,641,059.97 3,306,387.59 325,551,286.74Rilem Pharma Corp. 233,134.67 2,331.34 233,134.60Rivera Medical Center, Inc. 24,567,322.17 244,230.89 18,957,477.40Roldan, KennethBautista-Netnet 6,343,721.00* 63,437.21 6,342,824.78Royal Duty-Free Shops, Inc. 10,635,565.00 106,355.73 10,472,530.83Rustan Supercenters Inc. 127,323,391.00 1,273,233.91 12,740,561.88 114,582,829.13San Pedro Doctors Hospital,Inc. 6,823,716.27 68,237.16 6,517,936.44 Copyright 2018 CD Technologies Asia, Inc. and Accesslaw, Inc. Philippine Taxation Encyclopedia Second Release 2018 19Super 8 Retail Systems, Inc. 29,549,624.00 295,496.24 28,405,611.84Suy Sing Commercial Corp 35,528,582.00 355,285.82 35,449,835.41The Landmark Corporation 47,922,257.00 479,222.57 43,927,289.23Vaduz Marketing, Inc. 
165,275,664.38 1,652,756.64 161,868,769.29Veterans Memorial MedicalCenter 45,815,791.00 408,936.17 38,498,667.54Waltermart Supermarket,Inc. 63,669,284.14 636,691.95 52,213,056.54 11,456,227.60Watsons Personal CareStores (Phils.), Inc. 1,889,289,750.00 18,892,897.50 1,846,923,169.36 42,366,580.64Wing An Marketing Inc. 10,366,943.01 103,669.42 5,268,888.92Subtotal 5,409,204,731P19.51 54,868,554.21 4,601,962,418.33 807,241,953.18 SALE OF SERVICES Customer Name Per BIR Form 2307 Service Income Difference per General Income Payment Tax Withheld Ledger (A) (B)INCOME PAYMENT PER SALES REGISTER IS LESS THAN THE INCOME PAYMENT REFLECTED IN THE CWTCERTIFICATESAbbott Laboratories 237,571,740.90 5,526,721.13 25,002,625.57 212,569,115.3Alkem Laboratories, Inc. 159,777.76 3,195.56 137,450.19B. Braun Medical Supplies 6,581,894.42 88,666.62 917,383.47Bayer Philippines, Inc. 12,477,307.25 508,149.29 9,705,279.35Fresenius Kabi Philippines, 10,100,859.16 204,030.19 9,286,767.80Inc.Galderma Philippines, Inc. 1,334,800.50 26,696.01 185,091.04Glaxosmithkline Philippines, 75,573,915.50 1,417,195.47 32,490,214.39Inc.Msd (I.A) Corp Ph 22,417,495.87 448,349.93 12,335,460.20Novartis Healtcare 89,228,264.81 1,754,719.18 37,736,808.38Philippines, Inc.Oep Philippines, Inc. 3,497,199.98 69,180.59 3,319,125.72Pascual Laboratories, Inc. 55,910,118.00 559,101.18 5,747,749.48Pfizer, Inc. 80,051,571.00 1,587,316.28 27,605,441.69Reckitt Benckiser Healthcare 103,198,827.70 2,746,302.20 3,528,596.62(Phil), Inc.Sandoz Philippines Corp. 9,869,251.00 197,385.02 7,779,982.91Sanofi-Aventis Philippines, 19,920,254.50 398,405.09 12,322,163.26Inc.Sanofi Pasteur, Inc. 28,667,725.50 482,299.15 5,453,826.65Schering-Plough Corp. 59,966,008.98 1,199,269.09 4,342,434.71TOTAL P816,527,012.83 P17,216,981.98 P197,896,401.44 P618,630,611.3 Copyright 2018 CD Technologies Asia, Inc. and Accesslaw, Inc. Philippine Taxation Encyclopedia Second Release 2018 20 Consequently, petitioner's claimed CWT in the amounts of P10,121,236.91 and P13,090,532.87 related to the unverified sales of goods and services in the amounts of P969,074,728.83 and P618,630,611.39, respectively, shall be denied as follows: SDAaTC SALE OF GOODS ICPA's Computation Income Payment Deduction from of Sales Amount per per BIR Form Petitioner's CWT Tax Withheld Sales Register 2307 Customer Name (A) (B) (C) [D=A-(A x B/C)INCOME PAYMENT WITHNO CORRESPONDINGRECORD IN THE SALESREGISTERBarili District Hospital P756.61 - P75,661.00Benguet Provincial Government 166.70 - 18,670.00Cebu Doctors College, Inc. 1,526.79 - 152,678.57Cypress Manufacturing Limited 10,452.00 - 1,045,200.00Daanbantayan District Hospital 16.18 - 1,618.00DOH-Regional HealthOffice-CAR 2,571.43 - 257,142.86Dynasty Management &Development Corp 735,017.49 - 75,382,233.93Emilus Supermarket SystemsInc. 498.95 - 49,894.69Everlink Distribution Group, Inc. 30,775.44 - 30,775.44Far East Noble House 383,286.13 - 38,638,763.36FMC Renalcare Corp. 4,311.25 - 431,125.00Gaisano Dadaingas, Inc. 668.94 - 66,893.25Handyman Express Mart, Inc. 732.75 - 73,275.00Lion Commercial Corp. 2,943.08 - 294,308.02Provincial Government of Cavite 7,369.44 - 780,535.89Riviera Mercantile Sys., Inc. 40,441.59 - 4,044,159.00Sara Lee Philippines, Inc. 1,785.71 - 89,285.71Shogun Management &Development Corp. 399,260.26 - 40,030,611.29Shopmore Commercial Corp. 2,453.18 - 245,319.64Waltermart Handyman, Inc. 1,246.25 - 124,625.00Subtotal P1,626,280.17 - P161,832,775.65 Copyright 2018 CD Technologies Asia, Inc. and Accesslaw, Inc. 
Philippine Taxation Encyclopedia Second Release 2018 21Act General Merchandise &Drug, Inc. 1,447,808.32 144,670,718.92 144,780,830.81Adela Serra Ty Memorial MedicalCenter 6,913.70 522,706.14 774,335.04Allah Valley Medical SpecialistsCenter, Inc. 58,545.36 5,576,733.70 5,854,536.00Antipolo City Hospital, Inc. 14,733.87 1,356,312.03 1,473,385.94Asia Renal Care (Phils.) 14,682.86 516,492.10 1,468,286.22Asociacion Benevola De Cebu,Inc. (Chong Hua Hospital) 2,145,527.65 206,176,602.34 214,552,762.01Benguet General Hospital 45,148.57 4,252,494.34 5,055,641.35Botica Real 336,657.61 10,312,376.74 34,709,394.08Bureau of Corrections 25,884.71 2,483,890.32 2,588,471.00Cagayan De Oro Medical Center,Inc. 178,528.32 16,969,829.26 17,852,832.00Cagayan Valley Medical Center 232,944.64 9,053,476.24 10,912,837.60Cebu Far Eastern Drug, Inc. 19,341.77 1,884,554.97 2,163,237.24Center For Health DevelopmentIV-B 2,052.73 86,284.32 229,906.03Christ The King MedicalServices, Inc. 115,443.41 356,523.16 11,544,341.00City Government of Pasig 456,900.51 18,256,901.26 45,690,051.00City Government of Bago 513.39 47,767.86 51,339.29City Government of Binan 15,714.29 136,562.50 1,760,000.00City Government of Makati 297,133.86 21,333,043.66 29,713,385.71City Government of Muntinlupa 20,089.29 1,169,709.18 2,008,928.57City Supermarket Incorporated 515,447.18 50,543,219.22 51,544,719.99Colinas Verdes HospitalManagers Corp. 1,238,295.68 119,035,961.00 123,829,568.00Commission on Audit 24,282.61 137,573.71 2,428,261.00Commission on Elections 1,453.03 155,012.68 160,039.20Complete Solution PharmacyAnd Gen. Mer. 917,987.76 76,510,309.92 91,798,776.00CVC Supermarket, Inc. 7,385.12 735,418.37 738,512.00Davao Adventist Hospital, Inc. 30,932.35 2,618,366.10 3,205,653.65Davao Central Warehouse Club,Inc. 101,012.65 1,671,797.18 10,101,256.00Davao Regional Hospital 304,859.50 30,352,181.41 30,485,949.18De Los Santos Med. Ctr.Diagnostic Corp. 152,329.21 14,163,576.80 15,232,920.78Dranix Distributor, Inc. Iloilo 108,196.38 (1,206,667.24) 10,310,756.00E. Berlin Pharmacy 48,518.94 4,077,860.92 4,851,894.00Erlinda G. Germar/FarmaciaFatima & Fatima Soda Founta 154,510.62 10,535,625.73 15,451,062.84Ever Commonwealth Center, Inc. 447,640.07 36,906,409.12 44,764,014.47Evercare Pharmacy 3,631.49 206,136.46 296,745.57Everplus Superstore, Inc. 1,070,552.93 76,910,417.45 107,054,277.70Extract Sales Incorporated 1,026,591.32 78,820,886.01 102,659,130.47Feb Cuisine Corp. 30,110.01 3,010,996.18 3,331,046.11FEU-Dr. Nicanor Reyes MedicalFoundation 81,306.08 8,031,427.66 8,130,608.00 Copyright 2018 CD Technologies Asia, Inc. and Accesslaw, Inc. Philippine Taxation Encyclopedia Second Release 2018 22Gaisano Bros. Mdsg., Inc. 150,849.81 15,078,129.00 15,084,981.00Grand Union Supermarket Inc.,Operator 286,039.85 20,342,184.64 28,603,984.71Inter-Medical Unified System,Inc. 133,590.96 13,355,251.11 13,359,095.27ISS Facility Services Phil., Inc. 10,935.87 150,296.25 1,093,587.21Jemstar Trading Corporation 142,836.22 13,499,641.94 14,283,673.00Juliano, Susan G. 1,680.00 6,447.26 84,000.00K2 Drug & Medical Supplies 103,816.14 5,323,076.74 10,381,614.12Larrazabal, Susanna Ortega 1,084,545.61 106,784,231.16 108,432,225.46Lts Affiliates, Inc. 618,246.09 9,648,673.07 61,824,609.38Lucena United DoctorsIncorporated 124,559.28 12,333,856.68 12,455,928.00Manson Drug Corp. 3,473,375.69 246,640,598.58 347,337,544.82Marionnaud Philippines, Inc. 267,102.98 26,672,619.64 26,710,298.00Mary Johnston Hospital, Inc. 
17,816.27 1,461,365.65 1,781,627.77Medical Mission Group Hospital& Health Services Cooper 1,580.17 118,004.15 158,017.92Metropolitan PharmaceuticalProducts, Inc. 257.12 15,943.28 17,856.47Mindanao Sanitarium AdHospital 136,521.59 15,908,113.44 25,287,731.24Mount Carmel Diocesan GeneralHospital 320,289.38 31,301,758.99 32,028,938.00Negros Union Drug Co., Inc. 303,423.53 29,755,646.84 30,342,354.11Norvic Drugs Corporation 545,566.09 54,500,131.63 54,556,608.66Nueva Ecija Prohealth Inc./Heartof Jesus Hospital 3,007.87 (29,619.72) 300,785.66Oslob District Hospital 160.71 16,071.43 18,000.00Perpetual Succor Hospital &Maternity, Inc. 587,700.15 54,026,909.59 58,770,014.28Philex Mining Corporation 76,017.61 6,162,002.66 7,601,762.08Philippine Long DistanceTelephone Company 1,399,847.48 132,244,275.27 139,984,747.45Philippine Heart Center 3,776,666.97 137,624,178.74 294,471,060.57Province of Pampanga 26,783.48 2,187,098.21 2,678,348.21Provincial Government of NegrosOccidental 33,728.40 2,530,189.39 3,372,840.00Provincial Government of NuevaVizcaya-Nueva Vizcaya 2,804.15 283,640.18 314,065.00Provincial Government of Or.Neg. 17,421.92 1,486,520.11 1,868,680.08Puregold Junior Supermarket,Inc. 1,099,719.66 60,160,645.38 109,971,961.92Research Institute for TropicalMedicine 187,737.62 16,784,499.05 18,780,384.92Right Choice Supermarket 3,306,387.59 325,551,286.74 330,641,059.97Rilem Pharma Corp. 2,331.34 233,134.60 233,134.67Rivera Medical Center, Inc. 244,230.89 18,957,477.40 24,567,322.17Roldan, Kenneth Bautista-Netnet 63,437.21 6,342,824.78 7,045,642.41Royal Duty-Free Shops, Inc. 106,355.73 10,472,530.83 10,635,565.00 Copyright 2018 CD Technologies Asia, Inc. and Accesslaw, Inc. Philippine Taxation Encyclopedia Second Release 2018 23Rustan Supercenters, Inc. 1,273,233.91 12,740,561.88 127,323,391.00San Pedro Doctors Hospital, Inc. 68,237.16 6,517,936.44 6,823,716.27Super 8 Retail Systems, Inc. 295,496.24 28,405,611.84 29,549,624.00Suy Sing Commercial Corp 355,285.82 35,449,835.41 35,528,582.00The Landmark Corporation 479,222.57 43,927,289.23 47,922,257.00Vaduz Marketing, Inc. 1,652,756.64 161,868,769.29 165,275,664.38Veterans Memorial MedicalCenter 408,936.17 38,498,667.54 45,815,791.00Waltermart Supermarket, Inc. 636,691.95 52,213,056.54 63,669,284.14Watsons Personal Care Stores(Phils.), Inc. 18,892,897.50 1,846,923,169.36 1,889,289,750.00Wing An Marketing, Inc. 103,669.42 5,268,888.92 10,366,943.01Subtotal P54,868,554.21 P4,601,962,418.33 P5,410,719,504.25 SALE OF SERVICES Customer Name Tax Withheld Service Income per Income Payment Deduction from (A) General Ledger per BIR Form Petitioner's CWT (B) 2307 Claim [D=A-(A x B/C (C)INCOME PAYMENT PERSALES REGISTER IS LESSTHAN THE INCOMEPAYMENT REFLECTED INTHE CWT CERTIFICATESAbbott Laboratories 5,526,721.13 25,002,625.57 237,571,740.90Alkem Laboratories, Inc. 3,195.56 137,450.19 159,777.76B. Braun Medical Supplies 88,666.62 917,383.47 6,581,894.42Bayer Philippines, Inc. 508,149.29 9,705,279.35 12,477,307.25Fresenius Kabi Philippines, 204,030.19 9,286,767.80 10,100,859.16Inc.Galderma Philippines, Inc. 26,696.01 185,091.04 1,334,800.50Glaxosmithkline Philippines, 1,417,195.47 32,490,214.39 75,573,915.50Inc.Msd (I.A) Corp Ph. 448,349.93 12,335,460.20 22,417,495.87Novartis Healthcare 1,754,719.18 37,736,808.38 89,228,264.81Philippines, Inc.Oep Philippines, Inc. 69,180.59 3,319,125.72 3,497,199.98Pascual Laboratories, Inc. 559,101.18 5,747,749.48 55,910,118.00Pfizer, Inc. 
Pfizer, Inc. | 1,587,316.28 | 27,605,441.69 | 80,051,571.00
Reckitt Benckiser Healthcare (Phil), Inc. | 2,746,302.20 | 3,528,596.62 | 103,198,827.70
Sandoz Philippines Corp. | 197,385.02 | 7,779,982.91 | 9,869,251.00
Sanofi-Aventis Philippines, Inc. | 398,405.09 | 12,322,163.26 | 19,920,254.50
Sanofi Pasteur, Inc. | 482,299.15 | 5,453,826.65 | 28,667,725.50
Schering-Plough Corp. | 1,199,269.09 | 4,342,434.71 | 59,966,008.98
TOTAL | P17,216,981.98 | P197,896,401.44 | P816,527,012.83 | P13,090,532.87

In fine, petitioner complied with the three basic requisites for refund of excess CWT for taxable CY 2011 to the extent of only P417,222,162.68, computed as follows:

Claimed CWT per 2011 AITR | P477,269,935.23
Less: Unsupported discrepancy | 388.65
CWT with incorrect TIN (Annex 1-b) | 9,822,185.21
CWT with unclear date (Annex 1-c) | 24,449.26
CWT supported only by photocopies (Annex 1-d) | 7,545,434.24
CWT certificates not provided | 18,657,143.95
Additional disallowed CWT | 786,401.46
CWT on unverified sales of goods | 10,121,236.91
CWT on unverified sales of services | 13,090,532.87
Substantiated CWT | P417,222,162.68

In exercising its option, the corporation must signify in its annual corporate adjustment return (by marking the option box provided in the BIR form) its intention either to carry over the excess credit or to claim a refund. To facilitate tax collection, these remedies are in the alternative and the choice of one precludes the other. [47]

Petitioner claims that its 2011 income tax due in the amount of P286,241,875.50 [52] was paid using a portion of its prior year's excess credits of P606,744,200.10, leaving the prior year's excess credits in the amount of P320,502,324.60 (P606,744,200.10 less P286,241,875.50) and creditable taxes withheld during the year 2011 in the amount of P477,269,935.23, totaling P797,772,259.83 [53] unutilized as of December 31, 2011, as shown below:

Exh. | Taxable Year | Income Tax Due (a) | Prior Year's Excess Credits (b) | (Income Tax Still Due)/Balance of Prior Year's Excess Credits (c) = (b) less (a) | CWT for the Year | Excess CWT
"P-33-a" | 2003 | 230,768,665.60 | 81,872,706.00 | (148,895,959.60) | 201,932,078.00 |
"P-33-b" | 2004 | 132,913,922.24 | 53,036,118.00 | (79,877,804.24) | 268,776,166.00 | 188,898,361.76
"P-33-c" | 2005 | 241,944,558.70 | 188,898,362.00 | (53,046,196.70) | 287,973,112.00 | 234,926,915.30
"P-33-d" | 2006 | 231,799,757.70 | 234,926,915.00 | 3,127,157.30 | 304,484,366.00 | 307,611,523.30
"P-33-e" | 2007 | 341,664,169.70 | 307,611,523.00 | (34,052,646.70) | 389,190,279.00 | 355,137,632.30
"P-33-f" | 2008 | 376,397,087.50 | 355,137,632.00 | (21,259,455.50) | 392,075,751.00 | 370,816,295.50
"P-33-g" | 2009 | 279,561,489.90 | 370,816,295.00 | 91,254,805.10 | 417,215,892.00 | 508,470,697.10
"P-33-h" | 2010 | 337,760,187.90 | 508,470,697.00 | 170,710,509.10 | 436,033,691.00 | 606,744,200.10

To prove the existence of its prior year's excess credits, petitioner submitted BIR Forms No. 2307. However, only those pertaining to taxable year 2010, in the total amount of P436,033,691.27, were accounted for by the ICPA, to wit:

As stated earlier, petitioner reflected in its 2011 AITR an income tax due of P286,241,875.50, which, when offset against the prior year's (CY 2010) excess tax credits in the amount of P3,014,930.96, still leaves an income tax due of P283,226,944.54 to be deducted against the substantiated CWT of P417,222,162.68. Consequently, petitioner's unutilized excess CWT for CY 2011 amounted only to P133,995,218.14, computed as follows:

Substantiated CWT for CY 2011 | P417,222,162.68
Less: Income tax still due (P286,241,875.50 less P3,014,930.96) | 283,226,944.54
Excess CWT for refund | P133,995,218.14

In its original 2011 AITR [54] electronically filed on April 16, 2012, petitioner marked the option "To be refunded." Moreover, in its subsequent manual filing of the said return on April 30, 2012, petitioner also marked the option "To be refunded."
(55) These, together with petitioner's submission of a letter-claim for refund (56) with the respondent's office on April 13, 2012, a supplemental letter-claim for refund (57) and an Application for Tax Credits/Refunds (58) with the BIR's Large Taxpayers Service on February 28, 2014 and April 1, 2014, respectively, and the fact that petitioner did not carry over the present claim to the subsequent CY 2012 (59), served as an expression of its choice to have the CWT for CY 2011 refunded.

WHEREFORE, premises considered, the instant Petition for Review is PARTIALLY GRANTED. Accordingly, respondent is ORDERED to REFUND in favor of petitioner the amount of P133,995,218.14, representing its excess and unutilized creditable withholding taxes for CY ending December 31, 2011.

SO ORDERED.

Endnotes

1. Par. 1, Joint Stipulation of Facts and Issues (JSFI), Docket (Vol. I), p. 289.
2. Par. 4, JSFI, Docket (Vol. I), p. 290; Exhibit "P-2".
3. Par. 2, JSFI, Docket (Vol. I), p. 289.
4. Par. 3, JSFI, Docket (Vol. I), p. 290; Exhibit "P-1".
5. Exhibit "P-3".
6. Exhibit "P-4".
7. Exhibit "P-5".
8. Exhibit "P-6".
9. Exhibits "P-3", "P-4", and "P-6".
10. Exhibit "P-10".
11. Exhibit "P-11".
12. Exhibit "P-12".
13. Order dated May 6, 2014, Docket (Vol. I), p. 98.
14. Docket (Vol. I), pp. 99-106.
15. Notice of Pre-Trial Conference, Docket (Vol. I), p. 108.
16. Docket (Vol. I), pp. 109-113.
17. Docket (Vol. I), pp. 265-276.
18. Docket (Vol. I), pp. 289-296.
19. Docket (Vol. I), pp. 299-303.
20. Docket (Vol. I), pp. 304-307.
21. Exhibit "P-17", Sworn Statement of Mr. Joel R. Ducut to Questions Propounded by Atty. Gelina Rose E. Recio, and Supplemental Sworn Statement of Mr. Joel R. Ducut to Questions Propounded by Atty. Gelina Rose E. Recio, Docket (Vol. I), pp. 252-262 and 371-473, respectively; Minutes of the Hearing dated October 20, 2014 and April 13, 2015, respectively, Docket (Vol. I), pp. 321 and 657.
22. Exhibit "P-6", Sworn Statement of Ms. Katherine O. Constantino to Questions Propounded by Atty. Gelina Rose E. Recio, Docket (Vol. I), pp. 351-366; Minutes of the Hearing dated March 18, 2015, Docket (Vol. I), p. 367.
23. Docket (Vol. II), pp. 662-680.
24. Docket (Vol. II), pp. 830-879.
25. Docket (Vol. II), pp. 882-893.
26. Docket (Vol. II), pp. 1018-1048.
27. Exhibit "R-9", Judicial Affidavit of Revenue Officer Ma. Theresa L. Espino, Docket (Vol. II), pp. 819-824; Minutes of the Hearing dated May 11, 2016, Docket (Vol. II), p. 1050.
28. Docket (Vol. II), pp. 1053-1060.
29. Docket (Vol. II), pp. 1070-1071.
30. Docket (Vol. II), pp. 1072-1075.
31. Docket (Vol. II), pp. 1085-1111.
32. Docket (Vol. II), p. 1112.
33. Issues, JSFI, Docket (Vol. I), p. 290.
34. Citibank N.A. vs. Court of Appeals, et al., G.R. No. 107434, October 10, 1997; ACCRA Investments Corporation vs. The Honorable Court of Appeals, et al., G.R. No. 96322, December 20, 1991; United International Pictures AB vs. Commissioner of Internal Revenue, G.R. No. 168331, October 11, 2012; Republic of the Philippines, represented by the Commissioner of Internal Revenue vs. Team (Phils.) Energy Corporation (formerly Mirant (Phils.) Energy Corporation), G.R. No. 188016, January 14, 2015; Section 2.58, Revenue Regulations No. 2-98, as amended.
35. G.R. No. 96322, December 20, 1991, 204 SCRA 957.
36. Commissioner of Internal Revenue vs. TMX Sales Inc., et al., G.R. No. 83736, January 15, 1992.
37. G.R. No. 162155, August 28, 2007.
38. Exhibit "P-3".
39. Exhibit "P-10".
40. Exhibit "P-11".
41. Exhibit "P-12".
42. Exhibit "P-23-a".
43. Exhibits "P-39" to "P-5043" and "P-354706" to "P-366467".
44. Exhibit "P-19", p. 7.
45. Philam Asset Management, Inc. vs. Commissioner of Internal Revenue, G.R. Nos. 156637/162004, December 14, 2005; Systra Philippines, Inc. vs. Commissioner of Internal Revenue, G.R. No. 176290, September 21, 2007.
46. Commissioner of Internal Revenue vs. Bank of the Philippine Islands, G.R. No. 178490, July 7, 2009.
47. Philippine Bank of Communications vs. Commissioner of Internal Revenue, et al., G.R. No. 112024, January 28, 1999.
48. Exhibit "P-6".
49. Exhibit "P-6", line 33R.
50. Exhibit "P-6", line 33A.
51. Exhibit "P-6", lines 33F and 33H.
52. Exhibit "P-6", line 32.
53. Exhibit "P-6", line 34B.
54. Exhibit "P-3", line 37.
55. Exhibit "P-4", line 37.
56. Exhibit "P-10".
57. Exhibit "P-11".
58. Exhibit "P-12".
59. Exhibits "P-13", "P-14", "P-15", line 31A, "P-16", line 33A.

Copyright 2018 CD Technologies Asia, Inc. and Accesslaw, Inc. Philippine Taxation Encyclopedia Second Release 2018.
https://de.scribd.com/document/413732985/Zuellig-vs-CIR
CC-MAIN-2020-24
en
refinedweb
Camera.ViewportToWorldPoint

Returns: Vector3, the 3D vector in world space.

Transforms position from viewport space into world space. Viewport space is normalized and relative to the camera. The bottom-left of the viewport is (0,0); the top-right is (1,1). The z position is in world units from the camera. Note that ViewportToWorldPoint transforms an x-y screen position into an x-y-z position in 3D space.

using UnityEngine;
using System.Collections;

public class ExampleClass : MonoBehaviour
{
    void OnDrawGizmosSelected()
    {
        Camera camera = GetComponent<Camera>();
        Vector3 p = camera.ViewportToWorldPoint(new Vector3(1, 1, camera.nearClipPlane));
        Gizmos.color = Color.yellow;
        Gizmos.DrawSphere(p, 0.1F);
    }
}
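As a small follow-up sketch (this example is not from the reference page; the component and the target field are illustrative), the same call can pin an object to the centre of the view at a fixed distance in front of the camera:

using UnityEngine;

public class CenterPlacer : MonoBehaviour
{
    public Transform target; // assumed to be assigned in the Inspector

    void Update()
    {
        Camera camera = GetComponent<Camera>();
        // (0.5, 0.5) is the viewport centre; z = 10 world units from the camera.
        target.position = camera.ViewportToWorldPoint(new Vector3(0.5F, 0.5F, 10F));
    }
}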
https://docs.unity3d.com/es/2017.1/ScriptReference/Camera.ViewportToWorldPoint.html
CC-MAIN-2020-24
en
refinedweb
💡 The previous article introduced a new component, Hystrix. Hystrix is a circuit breaker that can be used to handle service failure and service degradation in microservice calls. Spring Cloud cognitive learning (4): the use of fuse Hystrix

💡 This article introduces a new component, Zuul, which is a gateway component. It receives API requests uniformly; based on the gateway we can process all requests and perform logging, request authorization, and other operations.

zuul

- Zuul can be used as a microservice gateway. A gateway means that the service consumer accesses the service provider through the gateway, just as hosts on a network hand their requests to a gateway, which then makes the requests outward on their behalf.
- Zuul is a member of Netflix OSS and works well with its other Netflix OSS siblings, such as Eureka, Ribbon, and Hystrix.
- To cope with Zuul's slow update pace, the Spring team also developed its own gateway component, Spring Cloud Gateway. But as I said before, on the one hand, learning this broadens your knowledge; on the other hand, you may one day take over a Zuul project, or need to migrate one, and you don't want to be stuck without Zuul knowledge. Zuul is not obsolete yet.

Effect:

💡 The gateway is used to hide the concrete service URLs. When an API is called externally, the gateway URL is used, which avoids leaking sensitive information about the services.
💡 Since all requests pass through the gateway, identity authentication and permission checks can be done in the gateway.
💡 Unified logging can be done in the gateway.
💡 Unified monitoring can be done in the gateway.
💡 It can be combined with Eureka, Ribbon, etc. to achieve load balancing of requests.

Simple example:

The following code can be found in the Zuul integration example.

0. Create a new module, spring-cloud-zuul-10001.

1. Import dependencies:

<dependencies>
    <!-- Introduce public dependency package: start -->
    <dependency>
        <groupId>com.progor.study</groupId>
        <artifactId>spring-cloud-common-data</artifactId>
        <version>1.0-SNAPSHOT</version>
    </dependency>
    <!-- Introduce public dependency package: end -->
    <!-- Import zuul -->
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-netflix-zuul</artifactId>
    </dependency>
    <!-- Import eureka -->
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
    </dependency>
    <!-- Import actuator -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
    <!-- Import hystrix -->
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-netflix-hystrix</artifactId>
    </dependency>
    <!-- Import web related -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-jetty</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
</dependencies>

2. Add annotations to the main class:

@SpringBootApplication
@EnableZuulProxy
@EnableEurekaClient // eureka is also needed, because Zuul pulls the service list from it
public class SpringCloudZuul10001Application {
    public static void main(String[] args) {
        SpringApplication.run(SpringCloudZuul10001Application.class, args);
    }
}

3.
Configure application.yml:

server:
  port: 10001

spring:
  application:
    name: spring-cloud-zuul

# For load balancing and pulling services, you need to configure eureka
eureka:
  client:
    service-url:
      defaultZone:
  instance:
    instance-id: zuul-10001.com
    prefer-ip-address: true

# Configure zuul
zuul:
  routes: # Configure routing
    UserService.serviceId: USERSERIVE # service name
    UserService.path: /myuser/** # the local mapping; for example, /user/list will map to /myuser/user/list locally

4. Test

🔴 Start spring-cloud-user-service-8001, spring-cloud-zuul-10001, and spring-cloud-eureka-server-7001.
🔴 Visit the gateway address (for example, http://localhost:10001/myuser/user/list) and you will find that the effect is the same as calling the 8001 service directly, so the above configuration is successful. We called the 8001 service through Zuul's 10001 as a gateway, so consumers can later call the gateway interface to indirectly call the service interface.

Configuration syntax: route

zuul:
  routes: # Configure routing
    UserService.serviceId: USERSERIVE # serviceId is the service name. The UserService. prefix groups the mappings and can be any custom prefix.
    UserService.path: /myuser/** # the local mapping; for example, /user/list will map to /myuser/user/list locally

It can also be:

zuul:
  routes: # Configure routing
    UserService:
      serviceId: USERSERIVE # service name
      path: /myuser/** # the local mapping; for example, /user/list will map to /myuser/user/list locally
    MessageService:
      serviceId: MESSAGESERIVE # service name
      path: /mymsg/**

💡 Ignored services: by default, Zuul will map all services that can be pulled. When there is a MessageService in eureka but we have not configured it, you can still access the MessageService by visiting http://localhost:10001/MessageService/msg/list, because the default mapping is /{service name}. If you don't need the default mappings and only want the mappings you configured manually, configure zuul.ignored-services = '*'.

💡 Prefix: used to specify a global prefix, for example in front of /mymsg/**. If you have configured zuul.prefix=/api, you need to call /api/mymsg/** later to access the interface of the MessageService.

# Configure zuul
zuul:
  prefix: /api
  routes: # Configure routing
    UserService:
      serviceId: USERSERIVE # service name
      path: /myuser/** # the local mapping; for example, /user/list will map to /myuser/user/list locally
  ignored-services: '*'

Add:

- As a gateway, you can do a lot of work in the gateway layer, such as the unified logging mentioned above; a minimal filter sketch is shown below. More content will be explained in a separate chapter on Zuul. Here is just a simple explanation of the most basic role of a gateway.
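As a taste of that gateway-layer processing, here is a minimal pre-filter for unified request logging. This sketch is not from the original article: the class name and log format are my own, but filterType/filterOrder/shouldFilter/run are the standard Netflix ZuulFilter methods.

import com.netflix.zuul.ZuulFilter;
import com.netflix.zuul.context.RequestContext;
import javax.servlet.http.HttpServletRequest;
import org.springframework.stereotype.Component;

@Component
public class RequestLogFilter extends ZuulFilter {

    @Override
    public String filterType() {
        return "pre"; // run before the request is routed to the target service
    }

    @Override
    public int filterOrder() {
        return 1;
    }

    @Override
    public boolean shouldFilter() {
        return true; // apply to every request passing through the gateway
    }

    @Override
    public Object run() {
        RequestContext ctx = RequestContext.getCurrentContext();
        HttpServletRequest request = ctx.getRequest();
        // unified logging for all requests entering the gateway
        System.out.println(request.getMethod() + " " + request.getRequestURL());
        return null; // the return value is ignored by Zuul
    }
}

Registering the filter as a Spring bean (here via @Component) is enough for Spring Cloud to pick it up.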
https://programmer.group/spring-cloud-cognitive-learning-the-use-of-zuul-gateway.html
CC-MAIN-2020-24
en
refinedweb
Django Tutorial using Visual Studio

Django is a high-level web development framework specially designed for Python developers. It provides a ready-to-use set of clean templates to start rapid web development. Django is free and open source, so you can customise your web app as needed. Read more about the Django Python framework.

Fortunately, in Visual Studio 2017 (and above) we get an integrated Django project template to start with quickly!

Let's open Visual Studio and select Python => Django Web Project.

Once you create the project, you will get the following ready-to-use Django web template included in the project. Before you run the project, let's understand the folder structure and files in the project.

If you have any web development experience in another language, that will definitely be an advantage. If not, nothing to worry about; you can learn it easily now!

Remember, every view (page) you create must be registered (defined) in the views.py file, as shown below.

Define all Django views in the app/views.py file:

from django.shortcuts import render
from django.http import HttpRequest
from django.template import RequestContext
from datetime import datetime

def home(request):
    """Renders the home page."""
    assert isinstance(request, HttpRequest)
    return render(
        request,
        'app/index.html',
        {
            'title':'Home Page',
            'year':datetime.now().year,
        }
    )

Notice how to add a master page / layout page reference in any view (page). Here we have created a new page where we want to add "layout.html" as a master page; use "extends" to inherit the master page layout:

{% extends "app/layout.html" %}

In a Python view we can add any plain non-Python content within a block {% block content %}. Whenever you start a block, make sure you close it with an {% endblock %} statement. This is actually a content placeholder defined in the master page. Here is how to add content in any Python view:

{% block content %}
<h2>{{ variableName1 }}.</h2>
<h3>{{ variableName2 }}</h3>
<p>any plain content can be written here, content like text, html, images everything you need to make your webpage meaningful.</p>
{% endblock %}

How to include any partial view in a view or in a master page:

{% include 'app/loginpartial.html' %}

How to add any JavaScript or CSS file reference in your Python project; notice the {% static 'file path' %} tag:

<link rel="stylesheet" type="text/css" href="{% static 'app/content/site.css' %}" />
<script src="{% static 'app/scripts/modernizr-2.6.2.js' %}"></script>

This is how you can define a content placeholder in a layout page or master page; just define it empty:

{% block content %}{% endblock %}

Like the content placeholder, we can define an optional script placeholder in a Python layout page:

{% block scripts %}{% endblock %}

settings.py (in the root of the Django web project) contains all application settings, like the list of installed apps, middleware classes, database connection details, static folder path, time zone, default language, etc. Before you configure database connection details, make sure you have installed the right database driver package in your project environment; a minimal sketch of the database settings is shown below.
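The engine and credentials below are placeholders, not values from this tutorial; install the matching driver package first, for example psycopg2 for PostgreSQL or mysqlclient for MySQL.

# settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',  # or .mysql, .sqlite3, ...
        'NAME': 'mydatabase',
        'USER': 'myuser',
        'PASSWORD': 'mypassword',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}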
https://www.webtrainingroom.com/python/django-project
CC-MAIN-2020-24
en
refinedweb
#26428 closed New feature (fixed): Add support for relative path redirects to the test Client
Opened 4 years ago. Closed 4 years ago. Last modified 23 months ago.

Description

Consider this definition, in a reusable app:

url(r'^inbox/$', InboxView.as_view(), name='inbox'),
url(r'^$', RedirectView.as_view(url='inbox/')),

And

urlpatterns = [
    url(r'^messages/', include((app_name_patterns, 'app_name'), namespace='app_name')),
]

This was working in 1.8, thanks to http.fix_location_header, which converted the URL to something like http://testserver/messages/inbox/. 1.9 introduced the "HTTP redirects no longer forced to absolute URIs" change and the fix has disappeared, the reason being "... allows relative URIs in Location, recognizing the actual practice of user agents, almost all of which support them.". Unfortunately, this is not fully the case for test.Client. It doesn't support relative-path references (not beginning with a slash character), only absolute-path references (beginning with a single slash character) (ref). A GET to inbox/ leads to a 404.

I'm using a workaround with ... url=reverse_lazy('app_name:inbox') ..., referred to by a TestCase.urls attribute, to produce /messages/inbox/, but I'm not happy with this hardcoded namespace.

Change History

comment:1
I think we should try to fix it in 1.9 given the regression nature of the report. A test is attached.

comment:2
Does this patch fix your issue? If not, could you provide a sample of your test code so we understand exactly what you're doing? Thanks.

comment:3
The patch does fix my issue, but a simple concatenation doesn't cover all cases. Suppose your test scenario is:

url(r'^accounts/optional_extra$', RedirectView.as_view(url='login/')),

with:

response = self.client.get('/accounts/optional_extra')

I think a more appropriate code is:

# Prepend the request path to handle relative-path redirects.
if not path.startswith('/'):
    url = urljoin(response.request['PATH_INFO'], url)
    path = urljoin(response.request['PATH_INFO'], path)

comment:4
Thanks for the feedback. I updated the pull request for your suggestion and added some release notes for 1.9.6.

Attachment: "Also fixes Client.get()"

comment:8
Though the fix solves the original problem, i.e. assertRedirects(), the summary of the ticket suggests broader support. I've attached a patch based on the original fix that also fixes Client.get() / Client.post() etc. when follow=True.

comment:12
This issue was not fully addressed. There are still a couple of problems: 1) It assumes that a relative redirect does not start with '/'. But what if you redirect relative to the host with a '/', as in redirecting to '/subpath' instead of ''? 2) It does not set the secure kwarg on the client.get call based on the original host scheme, because urlsplit is called only on the relative path. It seems that urlsplit should be called after building an absolute URL. Should this ticket be re-opened, or a separate ticket created?

comment:13
New ticket, please, as the fix for this one has been released for years.
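For illustration only (this snippet is neither the attached test nor the final patch): the approach from comment 3 amounts to resolving the Location header against the current request path with urljoin, which covers relative-path, sibling-path, and absolute-path references.

from urllib.parse import urljoin  # six.moves.urllib.parse on Python 2

# a relative-path reference resolves under the request path
assert urljoin('/messages/', 'inbox/') == '/messages/inbox/'
# a relative reference replaces the last path segment
assert urljoin('/accounts/optional_extra', 'login/') == '/accounts/login/'
# an absolute-path reference is returned unchanged
assert urljoin('/messages/', '/inbox/') == '/inbox/'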
https://code.djangoproject.com/ticket/26428
CC-MAIN-2020-24
en
refinedweb
zx_pager_create

Create a new pager object.

SYNOPSIS

#include <zircon/syscalls.h>

zx_status_t zx_pager_create(uint32_t options, zx_handle_t* out);

DESCRIPTION

zx_pager_create() creates a new pager object.

When a pager object is destroyed, any accesses to its vmos that would have required communicating with the pager will fail as if zx_pager_detach_vmo() had been called. Furthermore, the kernel will make an effort to ensure that the faults happen as quickly as possible (e.g. by evicting present pages), but the precise behavior is implementation dependent.

RIGHTS

None.

RETURN VALUE

zx_pager_create() returns ZX_OK on success, or one of the following error codes on failure.

ERRORS

ZX_ERR_INVALID_ARGS: out is an invalid pointer or NULL, or options is any value other than 0.

ZX_ERR_NO_MEMORY: Failure due to lack of memory.

SEE ALSO

zx_pager_create_vmo()
zx_pager_detach_vmo()
zx_pager_supply_pages()
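A minimal usage sketch (this example is not part of the original page; the wrapper function is illustrative):

#include <zircon/syscalls.h>

zx_status_t make_pager(zx_handle_t* out_pager) {
  // options must currently be 0, otherwise ZX_ERR_INVALID_ARGS is returned
  zx_status_t status = zx_pager_create(0u, out_pager);
  if (status != ZX_OK) {
    return status;  // e.g. ZX_ERR_NO_MEMORY
  }
  // use zx_pager_create_vmo() to create pager-backed VMOs, and call
  // zx_handle_close() on the pager handle when done
  return ZX_OK;
}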
https://fuchsia.googlesource.com/fuchsia/+/419b51fe8a82d81b63b0e67951ec6e224c2194f7/zircon/docs/syscalls/pager_create.md
CC-MAIN-2020-24
en
refinedweb
#26267 closed Bug (fixed): Template filter "slice" raises TypeError on bound RadioSelect
Opened 4 years ago. Closed 4 years ago. Last modified 4 years ago.

Description

I have a template with the following snippet:

{% for choice in form.choices|slice:'1:' %}
    <li>{{ choice }}</li>
{% endfor %}

where choices is a ChoiceField with a RadioSelect widget. This stopped working in 1.8. Looking through the code history, this looks like it was introduced in commit 5e06fa14 during work on ticket #22745. The change to BoundField.__getitem__():

 def __getitem__(self, idx):
+    # Prevent unnecessary reevaluation when accessing BoundField's attrs
+    # from templates.
+    if not isinstance(idx, six.integer_types):
+        raise TypeError
     return list(self.__iter__())[idx]

It no longer accepts a slice and will instead raise a TypeError.

Change History

comment:3
Moved PR.
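One way the guard could be relaxed (a sketch, not necessarily the committed fix) is to accept slice objects alongside integers, while still rejecting other index types coming from templates:

def __getitem__(self, idx):
    # Prevent unnecessary reevaluation when accessing BoundField's attrs
    # from templates, but allow both integer indexing and slicing.
    if not isinstance(idx, six.integer_types + (slice,)):
        raise TypeError
    return list(self.__iter__())[idx]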
https://code.djangoproject.com/ticket/26267
CC-MAIN-2020-24
en
refinedweb
Security

The two main security components you will use with the Python driver are Authentication and SSL.

Authentication

Versions 2.0 and higher of the driver support a SASL-based authentication mechanism when protocol_version is set to 2 or higher. To use this authentication, set auth_provider to an instance of a subclass of AuthProvider. When working with Cassandra's PasswordAuthenticator, you can use the PlainTextAuthProvider class.

For example, suppose Cassandra is set up with its default 'cassandra' user with a password of 'cassandra':

from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

auth_provider = PlainTextAuthProvider(username='cassandra', password='cassandra')
cluster = Cluster(auth_provider=auth_provider, protocol_version=2)

When working with version 2 or higher of the driver, the protocol version is set to 2 by default, but we've included it in the example to be explicit.

Custom Authenticators

If you're using something other than Cassandra's PasswordAuthenticator, SaslAuthProvider is provided for generic SASL authentication mechanisms, utilizing the pure-sasl package. If these do not suit your needs, you may need to create your own subclasses of AuthProvider and Authenticator. You can use the Sasl classes as example implementations.

Protocol v1 Authentication

When working with Cassandra 1.2 (or a higher version with protocol_version set to 1), you will not pass in an AuthProvider instance. Instead, you should pass in a function that takes one argument, the IP address of a host, and returns a dict of credentials with a username and password key:

from cassandra.cluster import Cluster

def get_credentials(host_address):
    return {'username': 'joe', 'password': '1234'}

cluster = Cluster(auth_provider=get_credentials, protocol_version=1)

SSL

To enable SSL you will need to set Cluster.ssl_options to a dict of options. These will be passed as kwargs to ssl.wrap_socket() when new sockets are created. For example:

from cassandra.cluster import Cluster
from ssl import PROTOCOL_TLSv1

ssl_opts = {'ca_certs': '/path/to/my/ca.certs', 'ssl_version': PROTOCOL_TLSv1}
cluster = Cluster(ssl_options=ssl_opts)

For further reading, Andrew Mussey has published a thorough guide on Using SSL with the DataStax Python driver.
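Building on the SSL example above: because the options dict is forwarded as keyword arguments to ssl.wrap_socket(), the other standard ssl module options can be used too. The following sketch (the file paths are placeholders) enables server-certificate validation and supplies a client certificate:

from ssl import CERT_REQUIRED, PROTOCOL_TLSv1
from cassandra.cluster import Cluster

ssl_opts = {
    'ca_certs': '/path/to/my/ca.certs',  # CA bundle used to verify the server
    'ssl_version': PROTOCOL_TLSv1,
    'cert_reqs': CERT_REQUIRED,          # fail the handshake if validation fails
    'keyfile': '/path/to/client.key',    # client auth, if the server requires it
    'certfile': '/path/to/client.crt',
}
cluster = Cluster(ssl_options=ssl_opts)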
https://docs.datastax.com/en/developer/python-driver/3.8/security/
CC-MAIN-2020-24
en
refinedweb
Remove all llvm:: prefixes in the FileCheck library header and implementation, except for calls to make_unique, since both files already use the llvm namespace.

Comment:

LGTM, although I commonly see llvm::make_unique to document that we are specifically not using the std:: one. On occasion it is actually ambiguous.
https://reviews.llvm.org/D62323?id=200997
CC-MAIN-2020-24
en
refinedweb
In this lesson we will be writing a component to manage user input. We will work with Unity's Input Manager so that your game should work across a variety of input devices (keyboard, controller, etc). The component we write will be reusable so that any script requiring input can receive and act on these events.

Custom Event Args

It is common to use an EventHandler when posting an event. Using this delegate pattern you must pass along the sender of the event, and an EventArgs (or subclass) as well. When we post input events, it is handy to pass along information such as what button was pressed, or what direction the user is trying to apply. Most of the time, all I ever need to pass is a single field of data. Rather than creating a custom subclass of EventArgs for each occasion, we can create a generic version.

Create a new folder within the Scripts folder called EventArgs. Then create a script within this folder called InfoEventArgs and use the following implementation:

using UnityEngine;
using System;
using System.Collections;
using System.Collections.Generic;

public class InfoEventArgs<T> : EventArgs
{
    public T info;

    public InfoEventArgs()
    {
        info = default(T);
    }

    public InfoEventArgs (T info)
    {
        this.info = info;
    }
}

This is a pretty simple class which can hold a single field of any data type named info. I created two constructors: an empty one which inits itself using the default keyword (this keyword handles both reference and value types), and one which allows the user to specify the initial value.

Unity's Input Manager

Unity provides an Input Manager to help simplify the… well, the management of input – that was obvious. From the menu bar choose Edit->Project Settings->Input. Look in the inspector and you will be able to see the various mappings of input concepts to input mechanisms. Expand the Axes (if it isn't already open) and you should see several entries such as: "Horizontal", "Fire1", and "Jump". There are actually several entries for most. One entry for "Horizontal" monitors keyboard input from the arrow keys or the 'a' and 'd' keys. Another entry for "Horizontal" monitors joystick axis input. In your code, you can check if there is "Horizontal" input from any of those sources with a single reference to that name.

Unity has done most of the heavy lifting for us. However, one of my own personal complaints with this manager (and several of their other systems) is a lack of support for events. You must check the status of Input every frame (through an Update method or Coroutine) in order to make sure you don't miss anything. As you may have guessed, this is not terribly efficient, and can be a bit cumbersome to re-implement everywhere you need input. Therefore, I will do this process only once, and then share the results via events with any other interested script.

Create another subfolder of Scripts called Controller. Inside this folder create our script, InputController.cs, and open it for editing. We will be using the "Horizontal" and "Vertical" inputs for a variety of things such as moving the tile selection cursor around the board (to select a move location or attack target) or to change the selected item in a UI menu. As I mentioned before, we will need to check for input on every frame, so let's go ahead and take advantage of the Update method. Add the following code to your script:

void Update ()
{
    Debug.Log(Input.GetAxis("Horizontal"));
}

Save your script, attach it to any gameobject in a new scene, and press play.
Every frame, a new debug log will print to the console (make sure Collapse is disabled so they appear in the correct time-wise order). Watch what happens to the value when you press the left or right arrow keys, or the 'a' and 'd' keys. Pressing right or 'd' causes the output to rise toward positive one, and pressing left or 'a' causes the output to lower toward negative one. If you aren't pressing in either direction, the output will ease back to zero.

With this function, Unity has smoothed the input for us. If I were making a game where a character could move freely through the world, such as an FPS, then that easing would help movement look a little more natural. For our game, I don't want any of the smoothing on directional input. Since we are snapping to cells on a board or between menu options, etc., a very obvious on/off tap of a button will be better for us. In this case there is another method we can try:

void Update ()
{
    Debug.Log(Input.GetAxisRaw("Horizontal"));
}

Save the script and run the scene again. Now the keyboard presses result in jumps immediately from zero to one or negative one depending on the direction you press.

You may have noticed that some games allow input both through pressing, and through holding. For example, as soon as I press an arrow key, the tile might move on the board. If I keep holding the arrow, after a short pause, the tile might continue moving at a semi-quick rate. I want to add this "repeat" functionality to our script, but since I will need it for multiple axes, it makes sense to track each one as a separate object so that we can reuse our code. I will add another class inside this script – normally I don't like to do that, but this second class is private and will only be used by our input controller, so it is an exception.

class Repeater
{
    const float threshold = 0.5f;
    const float rate = 0.25f;

    float _next;
    bool _hold;
    string _axis;

    public Repeater (string axisName)
    {
        _axis = axisName;
    }

    public int Update ()
    {
        int retValue = 0;
        int value = Mathf.RoundToInt( Input.GetAxisRaw(_axis) );

        if (value != 0)
        {
            if (Time.time > _next)
            {
                retValue = value;
                _next = Time.time + (_hold ? rate : threshold);
                _hold = true;
            }
        }
        else
        {
            _hold = false;
            _next = 0;
        }

        return retValue;
    }
}

At the top of the Repeater class I defined two const values. The threshold value determines the amount of pause to wait between an initial press of the button, and the point at which the input will begin repeating. The rate value determines the speed that the input will repeat.

Next, I added a few private fields. I use _next to mark a target point in time which must be passed before new events will be registered – it defaults and resets to zero, so that the first press is always immediately registered. I use _hold to indicate whether or not the user has continued pressing the same button since the last time an event fired. Finally, I use _axis to store the axis that will be monitored through Unity's Input Manager. This value is assigned via the class constructor.

After the constructor, I have an Update method. Note that this class is not a MonoBehaviour, so the Update method won't be triggered by Unity – we will be calling it manually. The method returns an int value, which will either be -1, 0, or 1. Values of zero indicate that either the user is not pressing a button, or that we are waiting for a repeat event. Inside the Update method, I declare a local variable called retValue which is the value which will be returned from the function.
It will only change from zero under special circumstances. Next we get the value this object is tracking from Unity's Input Manager using GetAxisRaw as we did earlier. I put the method inside of another method which rounds the result and casts it to an int value type. The if condition basically asks if there is user input or not. When the value field is not zero, the user is providing input. Inside this body we do another if condition check which verifies that sufficient time has passed to allow an input event. On the first press of a button, Time.time will always be greater than _next, which will be zero at the time. Inside of the inner condition body, we set the retValue to match the value reported by the Input Manager, and then set our time target to the current time plus an additional amount of time to wait. This means that subsequent calls into this method will not pass the inner condition check until some time in the future. Some of you may not be familiar with the conditional operator (?:) used here – it is very similar to an if condition, where the condition to validate is to the left of the question mark, the value to the right of the question mark is used when the condition is true, and the value after the colon is used when the condition is false. Finally, I mark _hold as being true.

The first (outer) if condition has an else clause – whenever the user is NOT providing input, this will mark our _hold value as false and reset the time for future events to zero so that they can immediately fire with the next press of the button.

Now it's time to put our Repeater class to good use. Add two fields inside the InputController class as follows:

Repeater _hor = new Repeater("Horizontal");
Repeater _ver = new Repeater("Vertical");

Whenever our Repeaters report input, I will want to share this as an event. I will make it static so that other scripts merely need to know about this class and not its instances. We will implement this EventHandler using generics so that we can specify the type of EventArgs – we will use our InfoEventArgs and specify its type as a Point. Don't forget that you will need to add a using statement for the System namespace in order to use the EventHandler.
The event we send for these will also use InfoEventArgs but instead of passing a Point struct for direction, it will just pass an int representing which Fire button was pressed public static event EventHandler<InfoEventArgs<int>> fireEvent; Add a string array to your class to hold the buttons you wish to check for: string[] _buttons = new string[] {"Fire1", "Fire2", "Fire3"}; In our Update loop after we check for movement, let’s add the following to loop through each of our Fire button checks: for (int i = 0; i < 3; ++i) { if (Input.GetButtonUp(_buttons[i])) { if (fireEvent != null) fireEvent(this, new InfoEventArgs<int>(i)); } } Using Our Input Controller Now that we’ve completed the Input Controller, let’s test it out. Create a temporary script somewhere in your project and add it to an object in the scene. You will also need to make sure to add the Input Controller to an object in the scene. I created a script called Demo in the root of the Scripts folder. I usually connect to events in OnEnable and disconnect from events in OnDisable. Remember that cleanup is very important – particularly when using static events, because they maintain strong references to your objects. This means they keep the objects from going out of scope and being truly destroyed, and could for example trigger events on scripts whose GameObject’s are destroyed. void OnEnable () { InputController.moveEvent += OnMoveEvent; InputController.fireEvent += OnFireEvent; } void OnDisable () { InputController.moveEvent -= OnMoveEvent; InputController.fireEvent -= OnFireEvent; } When you have added statements like this, but have not yet implemented the handler, you can have MonoDevelop auto-implement them for you with the correct signatures. Right-click on the OnMoveEvent and then choose Refactor->Create Method. A line will appear indicating where the implementation will be inserted which you can move up or down with the arrow keys, and then confirm the placement by hitting the return key. You should see something like the following: void OnMoveEvent (object sender, InfoEventArgs<Point> e) { throw new System.NotImplementedException (); } I dont want to crash the program so I will replace the throw exception statement with a simple Debug Log indicating the direction of the input. Debug.Log("Move " + e.info.ToString()); Use the same trick to implement the OnFireEvent handler and use a Debug message to indicate which button index was used. Debug.Log("Fire " + e.info); Run the scene and trigger input and watch the console to verify that everything works. Summary In this lesson we reviewed making a custom, generic subclass of EventArgs to use with our EventHandler based events. We discussed Unity’s Input Manager and how it can provide unified input across multiple devices, and then we wrapped it up with a new Controler class to listen for special input events specific to our game. In the end I showed a simple implementation that would listen to the events and take an action. 42 thoughts on “Tactics RPG User Input Controller” Love your new serie about TRPG. Keep writing 🙂 Suppose we don’t pass in a Point for the moveEvent, but instead want to handle a forward/back/left/right input based on the Unity Input Manager axes (horizontal and vertical). How would that be done? I suppose I would define an enum with the various directions you listed, and then pass a type of the enum based on the axis values. 
For example, if the ‘x’ value was greater than a certain threshold, I would pass along “right”, or if the ‘y’ value was greater than a certain threshold, I would pass along “up”. As an alternative to the enum, you could also pass along a KeyCode value such as “UpArrow”, see the docs: Just wondering – would declaring bools for each direction and trying to pass them in instead of an enum be bad practice? Sorry, I’m new to events and wondering why you’d choose enums. I’m also curious where I’m going wrong in trying to pass in an enum. moveEvent(this, new InfoEventArgs(MovementDirs.Left)); MovementDirs is a public enum. I wouldn’t declare a bool for each direction because it just feels wasteful. If the idea is that you want to support two directions simultaneously such as Up and Right, then you can use an enum as Flags (see my post here, if you are unfamiliar with that). When you are using the generic InfoEventArgs, you can help it understand what it is creating like this: “ new InfoEventArgs LessThan MovementDirs GreaterThan (MovementDirs.Left)” Sorry, wordpress keeps modifying my comment and stripping the generic line out. I am not sure how to display it correctly in a comment so you will have to replace the LessThan and GreaterThan bits with the appropriate character. Hey @thejon2014. Again, great tutorials. Here are some suggestions (for the newbies sake): > you may be interested in adding a note/link to the repository on every post, or maybe at the Project posts list (a different looking one, to make it obvious) > I could only know for sure where the EventHandler goes because I confirmed it at the Repository, so it might be good if you stated that more clearly. Hi, I’ve hit a bit of a road block following this tutorial and I think it has to do with my event handlers. I am getting no errors, but nothing is added to the debug log when I give input. This is my Demo.cs using System.Collections; using System.Collections.Generic; using UnityEngine; public class Demo : MonoBehaviour { void OnEnable () { Debug.Log (“This line is logged properly”);); } } Does this look like it should be working? Is the problem elsewhere? The code looks ok (but don’t forget to also unsubscribe from the fireEvent). Since you are able to see the Debug.Log where you subscribe for events but aren’t receiving anything, your next check should be that you are actually sending something. I would put some Debug.Log statements in the InputController script around your input where you expect the event to be posted. Your problem could be as simple as forgetting to have an InputController component in your scene. I noticed that you’re missing the info type in your method declarations. So OnMoveEvent parameter should be InfoEventArgs e, and OnFireEvent should be InfoEventArgs e. Hope this helps! Hi =) I’ve been so sorry for google translate but I can not speak English well. My question is: How do I get it to play the game on my mobile? As I know it is not possible that a UI button simulates a button print or am I wrong with this statement? How do I best deal with this? Can you help me or do you have an idea how I can manage this? What maybe still important is I am not really good in the coding I make it only half a year. And have given me everything myself. Thanks in advance =) Unity has a new UI system with Unity 5. Their UI Button does respond to touches for a mobile screen in the same way as it would respond to a mouse click. 
I have a short post that shows how to link button events to your code here: Note that the architecture is pretty different than the setup for the Tactics RPG project which was geared more toward using a game controller or keyboard so you could highlight menu options and then confirm them etc. With a touch input you don’t need a highlight state. I once tried a little, and did not come to any sly results. The only thing I could use but not 100% works is the following: Public float Move = 1.0f; Void Update () { // Up If (Input.GetKeyDown (KeyCode.U)) { Transform.Translate (0, 0, Move); } // Down If (Input.GetKeyDown (KeyCode.J)) { Etc If I pull it on the TileSelection Indicator, I can use it and could modify it using buttons. But when I drive 2 Tile’s to the left and it selects it does not go there. If I then but normally with (Key A) 1 left it goes 1 links although the Tile Selection Indicator is on 2 links. Then I remembered that you in the input controller believe the position update. Int x = _hor.Update (); Int y = _ver.Update (); If (x! = 0 || y! = 0) { If (moveEvent! = Null) MoveEvent (this, new InfoEventArgs (new point (x, y)); Could this be the reason that it does not work? And how can I rewrite or paste that it works? I thought I understood what you were asking, but now I am not sure. I thought you wanted to modify the Tactics RPG project so that it would work with a touch-screen interface on mobile rather than the keyboard input it currently uses. This response makes it sound like you are still working with keyboard input, but I think it wasn’t translated well enough for me to understand what the problem is or what your goal is. I’m sorry, as I said, I translate everything with Google Translate. You have already understood me the first time. I would like to play the game on my mobile phone. I wanted to explain to you that I made some attempts, but I do not find a working solution. With the above written code, I can indeed control the Tile Selection Indicator over the buttons. But when I move the Tile Selection Indicator with my code above, and then confirm the destination, it does not move. If I drive with the old control with keyboard 1 Tile to the left, and then still take another step with my code. If he ignores the position of the TileSelection Indicator (which in the game is now 2 on the left) and moves only 1 to the left. Perhaps I go completely wrong the problem, I hope they understand me now. I would also have a button to confirm the destination or to select the actions. Also there I have unfortunately no idea how I should do that. The same would be with the back button. In the tutorial of this project I thought step by step I understand what they are doing. But now I doubt slowly to myself. I am quite a newcomer in coding, and am in the end with my knowledge have never synonymous nor have been specially adapted for mobile phone. But there is always the first time =) Thanks for their help and excuse Ok, that helps clarify. I think your problem is that you are simply moving the tile selection indicator’s transform position. You are controlling where it appears in the scene, but the rest of the game logic also needs to be informed of the change. See the BattleController’s “pos” field. This is the data that the game references to determine what tile is actually selected. The object you moved is just a view that represents that information to the player, but it is up to you to keep them in sync. 
I’m not sure how much of a newcomer you are to coding, but it is worth noting that this is a pretty advanced project. Changing the input architecture may “sound” easy, but is actually a pretty involved process and will have a large impact on the flow of the states and look of the screens as well as a host of architectural challenges you’ll have to consider, like what happens if I tap two places on the screen simultaneously – this can be a much larger problem than you might think. I wouldn’t recommend attempting this sort of challenge until you have a solid understanding of programming, and fully understand the code I have already provided. Until then, I am working on another project which will be touch-screen friendly. I hope to start publishing it in a few weeks. Stay tuned because it will probably be a better starting point for you. I’ve seen I’ve written the wrong code as I’ve still tested it with the keyboard. This is the code I use with the UI button. public class TileSelectionController : MonoBehaviour { public float Move = 1.0f; public void UP () { transform.Translate(0, 0, Move); } So here I am again XD after the last error was stupid and this one must be too… well im getting this strange error on the “demo” script which i name InputHelper…. the on enable onfireevent retuns this error and does not let my enter play test mode : “Assets/Scripts/InputHelper.cs(10,19): error CS0123: A method or delegate `InputHelper.OnFireEvent(object, InfoEventArgs)’ parameters do not match delegate `System.EventHandler<InfoEventArgs>(object, InfoEventArgs)’ parameters” My code is the following : using System.Collections; using System.Collections.Generic; using UnityEngine; public class InputHelper : MonoBehaviour { void OnEnable () {); } } If i remove the inputcontroller.fireevent += onfireevent from the onenable method it lets me play test but only the move event works… Unfortunately wordpress always modifies the code in the comments so I can’t verify that you had the correct generic type applied to the InfoEventArgs in the OnFireEvent handler. Make sure that it is “int” instead of “Point”, otherwise it looks correct at my first glance. If necessary, copy a fire event handler from somewhere else in the code that is working and that should help you understand what you missed. Hi Jon, First of all, great tutorial, I really like the way you describe your thought process during decisions, so it’s easy to understand why it’s better to do it this way than the other way. Much better than the video tutorials where they just give you the fastest path, although many times the wrong one for a full project. So, my question is in the Update method for the Repeater class. Why do you use Mathf.RoundToInt instead of a simple cast to int? Since Input.GetAxisRaw will always be 0, 1, or -1, I didn’t understand the reason for that. Is it just a matter of preference? Thanks! Glad you are enjoying it! To answer your question, although the project lends itself to keyboard input, the Input class can accept joystick input as well. In that case, the rounding will help pick the value nearest to the intent of the user. Oh yeah, that makes sense. Thanks! extremely quick for me on Safari. Outstanding Blog! Hi there, I enjoy reading all of your post. I like to write a little comment to support you. Hey I’m having real issues with passing the moveEvent as a Point. Apparently Visual Studio can’t find the Point type. I am using the System namespace. 
After googling the issue I tried System.Drawing, but apparently that doesn’t exist so…I’m lost. Any advice? The “Point” I am using in this code is not provided by System. We create it in the board generator post. Also, if you ever get stuck you can refer to the completed project and compare Hi, this is a great tutorial, very challenging and professionally made. I’m learning a lot from it! I was wondering if you have made a more recent tutorial somewhere, could be with another game, with an updated version of the Unity Input controller. I did some research and apparently they fixed the issue about having to put your stuff in the update() method. Thanks! Glad you’re enjoying it and learning! I just looked at the updated Unity Documentation (2018.2) for Input and they still show it implemented with a polling style of architecture (via the update method). If however, you are referring to their EventSystem which they use with their new UI, then yes, I do also have projects that make use of that. Feel free to check out either the “Unofficial Pokemon Board Game” or “Collectible Card Game” tutorials from my Project page. Thanks very much, yes I was referring to their event system, will check that out! A few people in the comments seems to be removing InputController.fireEvent -= OnFireEvent; from OnDisable, but it seems like the example provided by you actually still has it included. Is there any reason to remove this? I kept it in my code and it seems to be working fine. Just curious more than anything. It is wrong to forget to unsubscribe from events. Events maintain a strong reference to objects and therefore could keep something alive when you don’t intend it to be. In other words it could cause your program to crash, or worse, keep running but execute code more times than you intended and get into an unexpected state. It occurred to me I never thanked you for this though I initially meant to. Just to clarify, that means I was right to keep the code? That snippet on the on disable is the unsubscribe events code? Either way, Thanks for this tutorial. It’s the perfect balance of me wanting to know every bit of putting a tactics rpg game together that I can, with the actuality of being able to create something worth showing in a reasonable time. This has really kept me motivated (and hopefully for me, I’ll be able to stay motivated.) Yes, you were correct to keep the code, and you’re welcome. It always feels nice to know that your work is appreciated and that you are inspiring others! Great Ьlog you have here.. It?s harԁ to find good quality writing like yours these days. I honestly appreciate indiviɗuals like you! Take care!! I have been trying to change this to an on screen ui button for the inputs, but can not figure it out. It seems I need to set the horizontal value manual for it to work. Or am I missing how to do this? If I understand what you are saying, then I think you are taking the correct approach. For example, connect a button event for an up button to an “upButtonPressed” method on your Input Controller, and then use that method to fire the appropriate notification (the same as you would had the up arrow keyboard key been pressed). I think I got it to work. I simple set the x or y directly, and also pass that into the repeater update function. The movement works fine, however the repeater no longer seems to work correctly. Hmmm Also, when I tried to directly fire a moveEvent from a function and gave it the values directly, it seemed to not work. 
(Since the horizontal and vertical raw never changes I think.) If you want to post your code in the Forum I’d be happy to take a look. The repeater needs to check for a held key each frame. The UI button only sends a single event (probably on the first frame it detects the mouse down). If you want a UI button that acts like a keyboard key, you will probably need to create your own and that goes outside the scope of this tutorial. I’d be happy to help you further in the Forums if you want. Thank you for your further offer of help! I actually figured out what I was doing wrong though, and it is a beginner mistake for sure. For anyone else who tries for uibutton controls, Just make sure you manipulate the input from within the update function……am I ever blind lol. Now repeaters are working as expected and now I am adding in camera zoom as well since I have a better understanding of the input control system now.
http://theliquidfire.com/2015/05/24/user-input-controller/
CC-MAIN-2020-24
en
refinedweb
> should the function be expanded to calculate for negative
> n or is the function expected to work only in combination sense?

If this were my design, I would offer both but in separate functions:

def comb(n, k):
    if n < 0:
        raise ValueError
    return bincoeff(n, k)

def bincoeff(n, k):
    if n < 0:
        return (-1)**k * bincoeff(k - n - 1, k)
    else:
        # implementation here...

I believe we agreed earlier that supporting non-integers was not necessary. Are you also providing a perm(n, k) function?
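For reference, a quick sanity check of the proposed split (my own illustration, assuming a working base case for non-negative n in bincoeff(), using the standard identity C(n, k) == (-1)**k * C(k - n - 1, k) for n < 0):

>>> comb(5, 2)
10
>>> bincoeff(-2, 3)    # (-2)(-3)(-4) / 3! == -4
-4
>>> comb(-2, 3)
Traceback (most recent call last):
  ...
ValueError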
https://bugs.python.org/msg332957
CC-MAIN-2020-24
en
refinedweb
VASP 2.0

Introduction

This module introduces an updated version of the ASE VASP calculator, which adds the functionality of the standard ASE Calculator interface.

class Vasp2(..., txt='vasp.out', **kwargs)

ASE interface for the Vienna Ab initio Simulation Package (VASP), with the Calculator interface.

Parameters:

- txt: str, None, or an output stream
  If txt is None, the output stream will be suppressed. If txt is '-' the output will be sent through stdout. If txt is a string, a file will be opened and the output will be sent to that file. Finally, txt can also be an output stream which has a 'write' attribute. Default is 'vasp.out'.

  Examples:

  >>> Vasp2(label='mylabel', txt='vasp.out') # Redirect stdout
  >>> Vasp2(txt='myfile.txt') # Redirect stdout
  >>> Vasp2(txt='-') # Print vasp output to stdout
  >>> Vasp2(txt=None) # Suppress txt output

- command: str
  Custom instructions on how to execute VASP. Has priority over environment variables.

- restart: str
  Prefix for restart file. May contain a directory. Default is None: don't restart.

- ignore_bad_restart_file: bool
  Ignore broken or missing restart file. By default, it is an error if the restart file is missing or broken.

- directory: str or PurePath
  Working directory in which to read and write files and perform calculations.

- label: str
  Name used for all files. Not supported by all calculators. May contain a directory, but please use the directory parameter for that instead.

- atoms: Atoms object
  Optional Atoms object to which the calculator will be attached. When restarting, atoms will get its positions and unit cell updated from file.

Note: Parameters can be changed after the calculator has been constructed by using the set() method:

>>> calc.set(prec='Accurate', ediff=1E-5)

This would set the precision to Accurate and the break condition for the electronic SC-loop to 1E-5 eV.

Storing the calculator state

The results from the Vasp2 calculator can be exported as a dictionary, which can then be saved in a JSON format, which enables easy and compressed sharing and storing of the inputs and outputs of a VASP calculation. The following methods of Vasp2 can be used for this purpose:

Vasp2.asdict()
  Return a dictionary representation of the calculator state. Does NOT contain information on the command, txt or directory keywords. Contains the following keys:
  - ase_version
  - vasp_version
  - inputs
  - results
  - atoms (only if the calculator has an Atoms object)

Vasp2.fromdict(dct)
  Restore calculator from an asdict() dictionary.
  Parameters:
  - dct: Dictionary. The dictionary which is used to restore the calculator state.

Vasp2.write_json(filename)
  Dump calculator state to JSON file.
  Parameters:
  - filename: string. The filename which the JSON file will be stored to. Prepends the directory path to the filename.

First we can dump the state of the calculation using the write_json() method:

# After a calculation
calc.write_json('mystate.json')

# This is equivalent to
from ase.io import jsonio
dct = calc.asdict()  # Get the calculator in a dictionary format
jsonio.write_json('mystate.json', dct)

At a later stage, that file can be used to restore the input and (simple) output parameters of a calculation, without the need to copy around all the VASP-specific files, using either the ase.io.jsonio.read_json() function or the Vasp2 fromdict() method.
calc = Vasp2()
calc.read_json('mystate.json')
atoms = calc.get_atoms() # Get the atoms object

# This is equivalent to
from ase.calculators.vasp import Vasp2
from ase.io import jsonio

dct = jsonio.read_json('mystate.json') # Load exported dict object from the JSON file
calc = Vasp2()
calc.fromdict(dct)
atoms = calc.get_atoms() # Get the atoms object

The dictionary object, which is created from the asdict() method, also contains information about the ASE and VASP version which was used at the time of the calculation, through the ase_version and vasp_version keys.

import json
with open('mystate.json', 'r') as f:
    dct = json.load(f)
print('ASE version: {}, VASP version: {}'.format(dct['ase_version'], dct['vasp_version']))

Note: The ASE calculator contains no information about the wavefunctions or charge densities, so these are NOT stored in the dictionary or JSON file, and therefore results may vary on a restarted calculation.

Examples
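As a minimal sketch of a typical run with this calculator (the structure, functional, and file names here are illustrative assumptions, not from the original page):

```python
from ase.build import bulk
from ase.calculators.vasp import Vasp2

# Build a bulk silicon cell and attach a Vasp2 calculator
atoms = bulk('Si')
calc = Vasp2(xc='PBE',            # exchange-correlation functional
             directory='si-calc',  # where VASP input/output files are written
             txt='vasp.out')       # redirect VASP output to a file
atoms.set_calculator(calc)

energy = atoms.get_potential_energy()  # triggers the VASP calculation
calc.write_json('si-state.json')       # store the calculator state for later
```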
https://wiki.fysik.dtu.dk/ase/dev/ase/calculators/vasp2.html
CC-MAIN-2020-24
en
refinedweb
Server Controls

Server controls are specifically designed to work with Web Forms. There are two types of server controls: HTML controls and Web controls. We review them in the following sections.

HTML Controls

The first set of server controls is HTML controls. Each HTML control has a one-to-one mapping with an HTML tag and, therefore, represents an HTML control you've probably been using. These controls reside in the System.Web.UI.HtmlControls namespace and derive either directly or indirectly from the HtmlControl base class. The controls and their corresponding HTML tags are given in Table 4.2.
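To make the mapping concrete, here is a hedged sketch (the control ID and page structure are illustrative, not from the book) of an HTML control declared in markup and used from C# code-behind:

```csharp
// Markup in the .aspx page -- adding runat="server" turns the plain
// <input> tag into a System.Web.UI.HtmlControls.HtmlInputText control:
//
//   <input type="text" id="NameField" runat="server" />

// Code-behind: the control is then available as a typed field
protected void Page_Load(object sender, EventArgs e)
{
    NameField.Value = "initial text"; // set the input's value on the server
}
```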
https://www.oreilly.com/library/view/inside-aspnet/0735711356/0735711356_ch04lev1sec4.html
CC-MAIN-2020-24
en
refinedweb
We are seeing more and more users doing rather advanced scripting in their soapUI Projects. Since groovy scripts in soapUI have full access to the soapUI object model, anything is possible. Unfortunately, the API documentation is only available in javadoc format and can be hard for new users to grasp, especially if they lack prior knowledge of java or groovy. I'll walk you through the basics of the soapUI API, hopefully giving you a foundation for further exploration.

soapUI has extensive support for spicing up your projects with scripts written in the groovy language. As you can see, the possibilities for customizing the behavior of your soapUI testing projects are quite extensive.

A note on the choice of groovy: when we started to provide scripting possibilities in 2006, groovy was the natural language of choice. Now this has changed and there are many viable alternatives to groovy via the Java 1.6 scripting API (JRuby, Jython, Javascript, etc), but since we have made our own optimizations for the Groovy ClassLoaders which would not be available for these other languages, we have opted to stick to groovy instead of providing "sub-optimal" support for other languages.

In soapUI all project-related artifacts (Projects, Requests, TestSuites, etc) are ModelItems; their interfaces are all defined in the com.eviware.soapui.model package and sub-packages (for example, have a look at the com.eviware.soapui.model.iface package for Interface/Operation/Request related classes). A ModelItem's name, description, icon, etc. can all be accessed through the corresponding getters; for example

log.info project.name

would print the name of the project variable. Obviously, depending on the type of ModelItem, properties and methods for accessing children are available. The general model for accessing children of a certain type from a ModelItem is as follows (XX = the type of child):

int getXXCount()
XX getXXByName( String name )
XX getXXAt( int index )
List getXXList()
Map getXXs()

For example, to get a certain MockService in a project you could use one of

def mockService = project.getMockServiceByName( "My MockService" )
def mockService = project.getMockServiceAt( 0 )

For iterating all LoadTests in a TestCase you could use

for( loadTest in testCase.loadTestList )
   log.info loadTest.name

Since groovy simplifies map access, the last of these can be used in several ways from a script. For example, if we have a TestSuite and want to access its TestCases we can do both

testSuites.testCases["..."]
testSuites.testCases."..."

Parent objects are generally available through the name of their type, i.e.

log.info( testCase.testSuite.name + " in project " + testCase.testSuite.project.name )

navigates "upward" in the object model using the testSuite and project properties.

You will often want to manipulate properties within your scripts, either those that are built in or those that are custom properties; the latter can be set on the following objects in soapUI: Projects, TestSuites, TestCases, MockServices, and the PropertiesTestStep (these all inherit from MutableTestPropertyHolder). Setting/getting properties is straightforward:

// set property value
object.setPropertyValue( "name", "value" )
object.properties["name"].value = "value"

// get property value
log.info object.getPropertyValue( "name" )
log.info object.properties["name"].value
log.info object.properties."name".value

When scripting inside some kind of "run", there is always a context variable available for getting/setting context-specific variables.
Each kind of run has its own context type; all of these inherit from the PropertyExpansionContext interface, which has methods for setting/getting properties and an expand method that can be used to expand arbitrary strings containing Property-Expansions; read more on this in the soapUI User-Guide.

As you've noticed, there is a "log" variable in all scripts. This is a standard log4j Logger which appends to the groovy log tab at the bottom of the soapUI window and can be used for diagnostic purposes, etc.
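As a small hedged example of using the context and log variables from a Groovy Script test step (the property names are made up for illustration):

```groovy
// 'context' and 'log' are injected by soapUI into every script
context.setProperty( "startTime", System.currentTimeMillis() )

// expand a property-expansion string against the current context
def endpoint = context.expand( '${#Project#apiEndpoint}' )
log.info "Calling endpoint: " + endpoint
```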
https://www.soapui.org/scripting---properties/the-soapui-object-model/
CC-MAIN-2020-24
en
refinedweb
Version 1.23.0

For an overview of this library, along with tutorials and examples, see CodeQL for JavaScript.

CallWithAnalyzedReturnFlow: a call with inter-procedural type inference for the return value.

import javascript

Member predicates:
- Gets a called function.
- INTERNAL: Do not use.
- Holds if this data flow node accesses the global variable g, either directly or through the window object.
- Gets type inference results for this data flow node.
- Gets the expression corresponding to this data flow node, if any.
- Gets a Boolean value that this node evaluates to.
- Gets a primitive type to which the value of this node can be coerced.
- Gets a data flow node to which data may flow from this node in one local step.
- Gets a type inferred for this node.
- Gets an abstract value that this node may evaluate to at runtime.
- Gets the unique Boolean value that this node evaluates to, if any.
- Gets the unique type inferred for this node, if any.
- Gets the toplevel in which this node occurs.
- Holds if the flow analysis can infer at least one abstract value for this node.
- Gets another data flow node whose value flows into this node in one local step (that is, not involving global variables).
- Holds if this node may evaluate to the Boolean value b.
- Holds if this node may evaluate to the string s, possibly through local data flow.
- Holds if this expression may refer to the initial value of parameter p.
- Gets a pretty-printed representation of all types inferred for this node as a comma-separated list, with the last comma being spelled "or".
- Gets a textual representation of this element.
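As a hedged illustration of the type-inference API this internal class builds on (the query uses the general analyze()/getAValue() predicates of the JavaScript data-flow library; since the class above is marked INTERNAL, it is not used directly):

```ql
import javascript

// List each call together with an abstract value its result may take
from DataFlow::CallNode call, AbstractValue v
where v = call.analyze().getAValue()
select call, v
```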
https://help.semmle.com/qldoc/javascript/semmle/javascript/dataflow/internal/InterProceduralTypeInference.qll/type.InterProceduralTypeInference$CallWithAnalyzedReturnFlow.html
CC-MAIN-2020-24
en
refinedweb
ISHPullUp

A vertical split view controller with a pull up gesture as seen in the iOS 10 Maps app. ISHPullUp provides a simple UIViewController subclass with two child controllers. The layout can be managed entirely via delegation and is easy to use with autolayout. View subclasses are provided to make beautiful iOS 10 style designs easier. ISHPullUpHandleView provides a drag handle as seen in the notification center or Maps app with three states: up, neutral, down. ISHPullUpRoundedView provides the perfect backing view for your bottom view controller with a hairline border and rounded top corners. Combine it with ISHHoverBar to create a UI resembling the iOS 10 Maps app.

Basic usage

To use the framework, create an instance of ISHPullUpViewController and set the contentViewController and bottomViewController properties to your own view controller instances. That's it, everything else is optional. Implement the appropriate delegates to fine-tune the behaviour. While all delegates are optional, the methods of all delegates themselves are mandatory. (A minimal Swift sketch of this setup appears near the end of this README.)

You can customize a lot more via the ISHPullUpViewController properties:
- When and if the bottom should snap to the collapsed and expanded positions
- At what threshold the content should be dimmed using which color
- If the bottom view controller should be shifted or resized

Controlling the height using the sizingDelegate

The height of the bottomViewController is controlled by the sizingDelegate. All heights are cached and updated regularly. You can use auto layout to calculate minimum and maximum heights (e.g. via systemLayoutSizeFitting(UILayoutFittingCompressedSize)). You can customize:
- The minimum height (collapsed): ...minimumHeightForBottomViewController...
- The maximum height (expanded): ...maximumHeightForBottomViewController...
- Intermediate snap positions: ...targetHeightForBottomViewController...

As an example, the minimum height is determined by the following delegate callback:

// Swift
func pullUpViewController(_ pullUpViewController: ISHPullUpViewController, minimumHeightForBottomViewController bottomVC: UIViewController) -> CGFloat

// ObjC
- (CGFloat)pullUpViewController:(ISHPullUpViewController *)pullUpViewController minimumHeightForBottomViewController:(UIViewController *)bottomVC;

The sizing delegate also provides a callback to adjust the layout of the bottomViewController: ...updateEdgeInsets:forBottomViewController:.

Adjusting the layout of the contentViewController using the contentDelegate

The view of the contentViewController fills the entire view and is partly overlaid by the view of the bottomViewController. In addition the area covered by the bottom view can change. To adjust your layout accordingly you may set the contentDelegate. Additionally, we suggest that your content view controller uses a dedicated view as the first child to its own root view that provides layout margins for the rest of the layout.
The typical implementation of the content delegate would then look like this: // Swift func pullUpViewController(_ vc: ISHPullUpViewController, update edgeInsets: UIEdgeInsets, forContentViewController _: UIViewController) { layoutView.layoutMargins = edgeInsets // call layoutIfNeeded right away to participate in animations // this method may be called from within animation blocks layoutView.layoutIfNeeded() } // ObjC - (void)pullUpViewController:(ISHPullUpViewController *)pullUpViewController updateEdgeInsets:(UIEdgeInsets)edgeInsets forContentViewController:(UIViewController *)contentVC { self.layoutView.layoutMargins = edgeInsets; // call layoutIfNeeded right away to participate in animations // this method may be called from within animation blocks [self.layoutView layoutIfNeeded]; } Reacting to changes in state The ISHPullUpViewController has four states: collapsed, dragging, intermediate, expanded. You can react to state changes (e.g. to update the state of a ISHPullUpHandleView) by setting the stateDelegate and implementing its only method. View subclasses ISHPullUpRoundedView A view subclass providing corner radius for the top edges and shadow. The shadow is only applied outside of the view content allowing for transparency. When using this subclass as the primary view for the bottom view controller the dimming (using ISHPullUpDimmingView) is automatically adjusted for the top edges’ rounded corners. You can configure most properties via Interface Builder or code including: - Shadow: opacity, offset, color, and radius - Stroke: color and line width - Corner radius The ISHPullUpRoundedVisualEffectView subclass uses a UIVisualEffectView as a background. ISHPullUpHandleView The ISHPullUpHandleView can be used anywhere in your view hierarchy to show a drag handle. It provides three states (up, neutral, and down) and can animate between the states. Changing the state is up to the implementation allowing you to either match the state of the pullup view controller or to always display it in a neutral state. The frame should not be set explicitly. You should rather use the intrinsic content size and rely on auto layout. In Interface Builder wait for the framework to be compiled once for the intrinsicContentSize to be correctly applied to your XIB. You can configure most aspects via Interface Builder or code including: - Size of the arrow - Stroke: color and width - State (via code only) General info The framework is written in Objective-C to allow easy integration into any iOS project and has fully documented headers. All classes are annotated for easy integration into Swift code bases. The sample app is written in Swift 3 and thus requires Xcode 8 to compile. The framework and sample app have a Deployment Target of iOS 8. Integration into your project Dynamically-Linked Framework Add the project file ISHPullUp.xcodeproj as a subproject of your app. Then add the framework ISHPullUp.framework to the app’s embedded binaries (on the General tab of your app target’s settings). On the Build Phases tab, verify that the framework has also been added to the Link Binary with Libraries phase, and that an Embed Frameworks phase has been created (unless it existed before). The framework can be used as a module, so you can use @import ISHPullUp; (Objective-C) and import ISHPullUp (Swift) to import all public headers. Further reading on Modules: Clang Documentation Include files directly Currently the project relies on 3 implementation files and their headers. 
You can include them directly into your project:

ISHPullUp/ISHPullUpHandleView.{h/m}
ISHPullUp/ISHPullUpRoundedView.{h/m}
ISHPullUp/ISHPullUpViewController.{h/m}

CocoaPods

You can use CocoaPods to install ISHPullUp as a static library. Add this to your Podfile:

target 'MyApp' do
  pod 'ISHPullUp'
end

ISHPullUp can also be installed as a framework:

target 'MyApp' do
  use_frameworks!
  pod 'ISHPullUp'
end

See the official website to get started with CocoaPods.

Carthage

Since ISHPullUp can be built as a framework, it supports Carthage, too. Add this to your Cartfile:

github "iosphere/ISHPullUp"

See the Carthage repository to get started with Carthage.

More OpenSource projects by iosphere

ISHPermissionKit – A polite and unified way of asking for permission on iOS

ISHHoverBar – A floating UIToolBar replacement as seen in the iOS 10 Maps app, supporting both vertical and horizontal orientation

TODO

- [ ] Add modal presentation mode for a bottom view controller

Latest podspec

{
  "name": "ISHPullUp",
  "version": "1.0.7",
  "summary": "Vertical split view controller with pull up gesture as seen in the iOS 10 Maps and Music app",
  "description": "ISHPullUp provides a simple UIViewController subclass with two child controllers. The layout can be managed entirely via delegation and is easy to use with autolayout. A pan gesture allows the user to drag the bottom view controller up or down. View subclasses are provided to make beautiful iOS 10 style designs easier. ISHPullUpHandleView provides a drag handle as seen in the notification center or Maps app with three states: up, neutral, down. ISHPullUpRoundedView (and ISHPullUpRoundedVisualEffectView) provides the perfect backing view for your bottom view controller with a hairline border, rounded top corners, and a shadow.",
  "homepage": "",
  "screenshots": "",
  "license": { "type": "MIT", "file": "LICENSE" },
  "authors": { "Felix Lamouroux": "[email protected]" },
  "source": { "git": "", "tag": "1.0.7" },
  "social_media_url": "",
  "platforms": { "ios": "8.0" },
  "source_files": "ISHPullUp/*.{h,m}",
  "frameworks": "UIKit"
}
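A minimal Swift sketch of the basic setup described at the top of this README (the child controller classes are placeholders for your own):

```swift
import ISHPullUp

let pullUpVC = ISHPullUpViewController()
pullUpVC.contentViewController = MapContentViewController()  // your content
pullUpVC.bottomViewController = BottomSheetViewController()  // your bottom sheet

// Optional: fine-tune behaviour via the delegates
// pullUpVC.sizingDelegate = self
// pullUpVC.stateDelegate = self
```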
https://tryexcept.com/articles/cocoapod/ishpullup
CC-MAIN-2020-24
en
refinedweb
There are a few methods to convert a list to a string in Python.

1. Using the join() function

join() is a string method that concatenates the elements of an iterable of strings into a single string. For example:

stng = ""
stng1 = ("gaon", "gore", "Gaon")
print(stng.join(stng1))

Output:
gaongoreGaon

Converting integers

To convert other data types to strings, use the str() function:

mlist = [1, 2, 3]
print(''.join(str(e) for e in mlist))

Output:
123

Specifying a delimiter

So far we have used an empty string as the separator, but any string can be used to separate the elements in the new string. Here we use '-' (hyphen):

mlist = ['1', '2', '3']
print('-'.join(str(e) for e in mlist))

Output:
1-2-3

Joining only part of the list

In some cases we need to join not the whole list but only part of it; in that case, we slice the list to the desired range. Here we use a range of two elements:

mlist = ['1', '2', '3', '4']
print('-'.join(str(e) for e in mlist[:2]))

Output:
1-2

join(list) - String Method

Using "".join(list): it takes a list of strings and joins them into one string.

Note: If the list contains strings then it joins them directly.

Example:

m_lst = ["Mumbai ", "is ", "a city"]
print("".join(m_lst))

Output:
Mumbai is a city

Using ''.join(map())

In the case of a list of numbers, the map() function can be used to convert the elements to strings before joining:

m_lst = [90, 88, 65, 64]
print("".join(map(str, m_lst)))

Output:
90886564

2. Traversal of the list

Initialize an empty string, traverse all characters in the list, and append each one to build up the string. When the traversal is complete, the string is printed.

Example:

# program to convert a list of characters to a string
def convert(characters):
    # initialization of string to ""
    new_char = ""
    # traverse the list
    for x in characters:
        new_char += x
    # return string
    return new_char

# driver code
characters = ['I', 'n', 'd', 'i', 'a', ' ', 'i', 's', ' ', 'a', ' ', 'g', 'r', 'e', 'a', 't']
print(convert(characters))

Output of the program will be:
India is a great
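Mixed-type lists are usually joined with a separator plus a str() conversion; here is one more small sketch (the sample values are illustrative):

```python
items = ["apples", 3, "bananas", 4.5]
result = ", ".join(str(item) for item in items)
print(result)  # apples, 3, bananas, 4.5
```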
https://www.stechies.com/convert-list-string-python/
CC-MAIN-2020-24
en
refinedweb
Represents a class that is responsible for dynamically building a controller.

public class ControllerBuilder
type ControllerBuilder = class
Public Class ControllerBuilder

Constructors:
- ControllerBuilder(): Initializes a new instance of the ControllerBuilder class.

Properties:
- Current: Gets the current controller builder object.
- DefaultNamespaces: Gets the default namespaces.

Methods:
- GetControllerFactory(): Gets the associated controller factory.
- SetControllerFactory(IControllerFactory): Sets the specified controller factory.
- SetControllerFactory(Type): Sets the controller factory by using the specified type.
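A short hedged sketch of typical usage, wiring a custom factory at application startup (MyControllerFactory and the namespace are assumptions for illustration):

```csharp
using System.Web.Mvc;

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // Replace the default factory with a custom one (e.g. DI-backed)
        ControllerBuilder.Current.SetControllerFactory(new MyControllerFactory());

        // Namespaces searched when resolving controller names
        ControllerBuilder.Current.DefaultNamespaces.Add("MyApp.Controllers");
    }
}
```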
https://docs.microsoft.com/en-us/dotnet/api/system.web.mvc.controllerbuilder?redirectedfrom=MSDN&view=aspnet-mvc-5.2
CC-MAIN-2020-24
en
refinedweb
the map along with many additional functionalities that make it easier to work with layers. We have also added manager properties to the map for controls, events, the map's image sprite, HTML markers, and data sources. Similarly, we have also added the option to set many of the default map SDK settings directly into the root atlas namespace of the SDK. By doing this, all map instances that are created on a page, and any service clients created, will use these settings as defaults. Previously, every map instance and service client added to your application required passing in your Azure Maps key during initialization of that feature. This meant that you would have to set this same setting multiple times in your application, which can become cumbersome. By setting your key in the root atlas namespace you only need to specify it once in your application, making it easy to update in the future if needed.

atlas.setSubscriptionKey('Your Azure Maps Key');

Several other settings such as language and user region information can also be set as defaults in the root atlas namespace. It is important to note that these are default values which will be used if the equivalent option isn't specified when initializing a map instance or service client. If, for example, you pass in a different language when initializing the map, it will use that one instead. This is useful in scenarios where you may want to show multiple maps on a page in different languages. All of these developer focused improvements make it much easier to develop applications with Azure Maps while also increasing performance and reducing the amount of code required to write your application.

New data source and layering model

Previously, the map control only provided the ability to add vector spatial data that was in GeoJSON format to the map. This data was added to a layer which created and managed its own data source behind the scenes. Modifying any data in a layer required overwriting the existing layer, which is inefficient and requires code that is unintuitive, as you can see in the sample code below.

var myPins = [/* Array filled with pin data */];

//Add pins to map.
map.addPins(myPins, { name: 'MyPinLayer' });

//Create a new pin to add to layer.
var pin = new atlas.data.Feature(new atlas.data.Point([lon, lat]));

//Add the pin to array of pins.
myPins.push(pin);

//Update the layer by overwriting all data in the layer. This is unintuitive and creates a performance hit.
map.addPins(myPins, { name: 'MyPinLayer', overwrite: true });

With this release we have separated the data source from the layer, which provides several benefits, such as the ability to render a single data source using multiple layers while only maintaining a single instance of the data source, thus reducing memory usage, improving performance, and creating an API that is a lot easier to understand, as you can see below.

//Create a data source and add it to the map.
var dataSource = new atlas.source.DataSource();
map.sources.add(dataSource);

//Add pins to the data source.
dataSource.add([/* Array filled with pin data */]);

//Create a new pin to add to map.
var pin = new atlas.data.Feature(new atlas.data.Point([lon, lat]));

//Add the pin to the data source, the map automatically updates in the most efficient manner possible.
dataSource.add(pin);

In addition to the DataSource class for GeoJSON formatted data, we have also added support for vector tile services via a new VectorTileSource class. These data sources can be attached to the following layers, which define how the data is rendered on the map.

- Bubble Layer – Render point data as scaled circles using a pixel radius.
- Line Layer – Render lines and polygon outlines.
- Polygon Layer – Render the filled area of polygons.
- Symbol Layer – Render point data as icons and text.

There is also a TileLayer class which allows you to superimpose raster tiled images on top of the map. Rather than attaching this layer to a data source, the tile service information is specified as options of the layer. While creating this new data source and layering model in the SDK we also more than doubled the functional features and rendering options for visualizing data on the map.

Connect multiple layers to a data source

As mentioned previously, you can now attach multiple layers to the same data source. This may sound odd, but there are many different scenarios where this becomes useful. Take for example the scenario of creating a polygon drawing experience. When letting a user draw a polygon we should render the filled polygon area as the user is adding points to the map. Adding a styled line that outlines the polygon will make it easier to see the edges of the polygon as it is being drawn. Finally, adding some sort of handle, such as a pin or marker, above each position in the polygon would make it easier to edit each individual position. Here is an image that demonstrates this scenario.

To accomplish this in most mapping platforms you would need to create a polygon object, a line object, and a pin for each position in the polygon. As the polygon is modified, you would need to manually update the line and pins. The work required to do this becomes complex very quickly. With Azure Maps all you need is a single polygon in a data source as shown in the code below.

//Create a data source and add it to the map.
var dataSource = new atlas.source.DataSource();
map.sources.add(dataSource);

//Create a polygon and add it to the data source.
dataSource.add(new atlas.data.Polygon([[[/* Coordinates for polygon */]]]));

//Create a polygon layer to render the filled in area of the polygon.
var polygonLayer = new atlas.layer.PolygonLayer(dataSource, 'myPolygonLayer', {
  fillColor: 'rgba(255,165,0,0.2)'
});

//Create a line layer for greater control of rendering the outline of the polygon.
var lineLayer = new atlas.layer.LineLayer(dataSource, 'myLineLayer', {
  color: 'orange',
  width: 2
});

//Create a bubble layer to render the vertices of the polygon as scaled circles.
var bubbleLayer = new atlas.layer.BubbleLayer(dataSource, 'myBubbleLayer', {
  color: 'orange',
  radius: 5,
  outlineColor: 'white',
  outlineWidth: 2
});

//Add all layers to the map.
map.layers.add([polygonLayer, lineLayer, bubbleLayer]);

See live examples.

New easy to manage Shape class

All vector-based data in the Azure Maps Web SDK consists of GeoJSON objects, which at the end of the day are just JSON objects that follow a defined schema. One limitation with using GeoJSON data is that if you modify the data, the map isn't aware of the change until you remove and replace the object in the map. To make things easier and more intuitive we have added a new Shape class which can wrap any GeoJSON feature or geometry. This class provides several functions that make it easy to update GeoJSON data and have those changes instantly reflected in the data source the Shape was added to. We found this to be so useful that we automatically wrap all GeoJSON objects added to the DataSource class.

Take for example the scenario where you want to update the position of a data point on the map. Previously you had to manage the data in the layer separately and then overwrite the layer as shown in the code below.

//Create a pin from a point feature.
var pin = new atlas.data.Feature(new atlas.data.Point([-110, 45]));

//Add a pin to the map.
map.addPins([pin], { name: 'MyPinLayer' });

//Update pins coordinates... Map does not update.
pin.geometry.coordinates = [-120, 30];

//Overwrite all pins in the layer to update the map.
map.addPins([pin], { name: 'MyPinLayer', overwrite: true });

This is unintuitive and a lot more work than it should be. By wrapping the data point with the Shape class, it only takes one line of code to update the position of the data point on the map, as shown in the code below.

//Create a data source and add it to the map.
var dataSource = new atlas.source.DataSource();
map.sources.add(dataSource);

//Create a pin and wrap with the shape class and add to data source.
var pin = new atlas.Shape(new atlas.data.Point([-110, 45]));
dataSource.add(pin);

//Update the coordinates of the pin, map automatically updates.
pin.setCoordinates([-120, 30]);

Tip: You can easily retrieve the shape wrapped version of your data from the data source rather than wrapping each object individually.

Data driven styling of layers

A key new feature in this update is the new data-driven styling capabilities with property functions. This allows you to add business logic to individual styling options which takes into consideration the properties defined on each individual shape in the attached data source. The zoom level can also be taken into consideration when the layer is being rendered. Data-driven styles can greatly reduce the amount of code you would normally need to write to define this type of business logic using if-statements and monitoring map events.

As an example, take into consideration earthquake data. Each data point has a magnitude property. To show the relative magnitude of each data point on a map we might want to draw scaled circles using the BubbleLayer, where the larger the magnitude of a data point, the larger the radius of the circle. The following code demonstrates how to apply a data-driven style to the radius option of the BubbleLayer, which will scale the radius based on the magnitude property of each data point on a linear scale, from 2 pixels at magnitude 0 to 40 pixels at magnitude 8.

var earthquakeLayer = new atlas.layer.BubbleLayer(dataSource, null, {
  radius: ['interpolate', ['linear'], ['get', 'magnitude'],
    0, 2,
    8, 40
  ]
});

We could also apply a similar data-driven style which defines the color of each circle and generate a map that looks like the following. See live example.

Spatial math library

A new spatial math library has been added to the atlas.math namespace which provides a collection of useful calculations that are commonly needed in many map applications. Some of the functionality provides the ability to calculate:

- Straight line distance between positions.
- The length of a line or path.
- The heading between positions.
- Distance conversions.
- Cardinal splines, which allow nice smooth curved paths to be calculated between a set of points.
- Geodesic paths, which is the direct path between two points taking into consideration the curvature of the earth.
- Intermediate positions along a path.

See live example.

This library provides many common simple spatial calculations. If you require more advanced spatial calculations such as geometry unions or intersections, you may find the open source Turf.js library useful. Turf.js is designed to work directly with GeoJSON data, which is the base format for all vector data in Azure Maps, making it easy to use with Azure Maps.
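As a hedged sketch of the kind of calculation the atlas.math namespace offers (the coordinates are illustrative; check the SDK reference for exact signatures and unit options):

```javascript
var seattle = [-122.33, 47.6];
var redmond = [-122.12, 47.67];

//Straight-line distance between the two positions, in meters.
var distanceMeters = atlas.math.getDistanceTo(seattle, redmond, 'meters');

//Heading from the first position toward the second.
var heading = atlas.math.getHeading(seattle, redmond);
```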
//Create a pin from a point feature. var pin = new atlas.data.Feature(new atlas.data.Point([-110, 45])); //Add a pin to the map. map.addPins([pin], { name: 'MyPinLayer' }); //Update pins coordinates... Map does not update. pin.geometry.coordinates = [-120, 30]; //Overwrite all pins in the layer to update the map. map.addPins([pin], { name: 'MyPinLayer', overwrite: true }); This is unintuitive and a lot more work than it should be. By wrapping the data point with the Shape class, it only takes one line of code to update the position of the data point on the map as show in the code); //Create a pin and wrap with the shape class and add to data source. var pin = new atlas.Shape(new atlas.data.Point([-110, 45])); dataSource.add(pin); //Update the coordinates of the pin, map automatically updates. pin.setCoordinates([-120, 30]); Tip: You can easily retrieve the shape wrapped version of your data from the data source rather than wrapping each object individually. Data driven styling of layers A key new feature in this update is the new data-driven styling capabilities with property functions. This allows you to add business logic to individual styling options which take into consideration the properties defined on each individual shape in the attached data source. The zoom level can also be taken into consideration when the layer is being rendered. Data-driven styles can greatly reduce the amount of code you would normally need to write and define this type of business logic using if-statements and monitoring map events. As an example, take into consideration earthquake data. Each data point has a magnitude property. To show the related magnitude of each data point on a map we might want to draw scaled circles using the BubbleLayer when the larger the magnitude of a data point, the larger the radius of the circle. The following code demonstrates how to apply data-driven style to the radius option in the BubbleLayer, which will scale the radius based on the magnitude property of each data point on a linear scale from 2 pixels, magnitude of 0 to 40 pixels, and the magnitude is 8. var earthquakeLayer = new atlas.layer.BubbleLayer(dataSource, null, { radius: ['interpolate', ['linear'], ['get', 'magnitude'], 0, 2, 8, 40 ] }); We could also apply a similar data driven style which defines the color of each circle and generate a map that looks like the following. See live example. Spatial math library A new spatial math library has been added to the atlas.math namespace which provides a collection of useful calculations that are commonly needed in many map applications. Some of the functionality provides the ability to calculate: - Straight line distance between positions. - The length of a line or path. - The heading between positions. - Distance conversions. - Cardinal splines, which allow nice smooth curved paths to be calculated between a set of points. - Geodesic paths, which is the direct path between two points taking into consideration the curvature of the earth. - Intermediate positions along a path. See live example. This library provides many common simple spatial calculations. If you require more advance spatial calculations such as geometry unions or intersections, you may find the open source Turf.js library useful. Turf.js is designed to work directly with GeoJSON data which is base format for all vector data in Azure Maps, making it easy to use with Azure Maps. 
Support for geospatially accurate circles GeoJSON schema does not provide a standardized way to define a geospatially accurate circle. For this reason the Azure Maps team has standardized a common way to define a geospatially accurate circle in GeoJSON without breaking the schema as shown in our documentation. You can define GeoJSON objects using pure JSON in the Azure Maps web control or using the helper classes in the atlas.data namespace. Here is an example of how to define a circle with a 1000-meter radius over Seattle. Using pure JSON var circle = { "type": "Feature", "geometry": { "type": "Point", "coordinates": [-122.33, 47.6] }, "properties": { "subType": "Circle", "radius": 1000 } }; Using helper classes in the atlas.data namespace var circle = new atlas.data.Feature(new atlas.data.Point([-122.33, 47.6]), { subType: "Circle", radius: 1000 }); When rendering these circles, the Azure Maps web control converts this Point feature into a circular Polygon which can be used with many of the different rendering layers. Here is a map where the circle is rendered as a filled polygon. See live example. One key difference between geospatially accurate circles and circles generated by the BubbleLayer is that the bubble layer assigns a pixel radius for each bubble. As the user zooms the map, the pixel radius doesn’t change, thus the map area covered by the bubble does. A geographically accurate circle has its vertices bounded to coordinates on the map. As the map is zoomed, the circle scales and maintains the area it covers. It is important to note that these circles may not always appear to be circular due to the Mercator projection used by the map. In fact, the closer the circle is to the North or South pole, the larger and more elliptical it may appear, however, the area it represents on a globe is circular. The following map shows two circles that have a radius of 750KM (750,000 meters). One circle is rendered over Greenland which is close to the North pole, while the other is rendered over Brazil, close to the equator. See live example. Backwards compatibility If you have already developed an app with Azure Maps you might be asking yourself if this means you have to rewrite your whole application. The answer is no. We have worked hard to maintain backwards compatibility. We have marked many of the old functions as deprecated in our documentation to prevent developers from using these in future applications, but will continue to support these features as they are in version 1 of the SDK. All this being said, we have come across a few applications which have skipped an important step when using the map control. When an instance of the map control is created it needs to load several resources such as a Web-GL canvas. This happens fairly quickly, but occurs asynchronously which means that the next line of code after creating the map instance can potentially be called before the map has finished loading. If that line of code tries to interact with the map before it is loaded, an error can occur. To resolve this, the “load” event should be attached to the map and functionality that needs to run after the map has loaded should be added into the callback of the event. Currently if you aren’t using the map’s load event, your application may work fine most of the time, but on another user’s device it may not. Here is some code that shows the issue and how to resolve it. Issue //Initialize a map instance. 
var map = new atlas.Map('myMap', { 'subscription-key': 'Your Azure Maps Key' }); //Additional code that interacts with the map. The map may not be finished loading yet. Resolution using older API interface (still supported) //Initialize a map instance. var map = new atlas.Map('myMap', { 'subscription-key': 'Your Azure Maps Key' }); //Wait until the map resources have fully loaded. map.addEventListener("load", function (e) { //Add your additional code that interacts with the map here. }); Resolution using the new API interface //Add your Azure Maps subscription key to the map SDK. atlas.setSubscriptionKey('Your Azure Maps Key'); //Initialize a map instance. var map = new atlas.Map('myMap'); //Wait until the map resources have fully loaded. map.events.add('load', function (e) { //Add your additional code that interacts with the map here. }); We want to hear from you! We are always working to grow and improve the Azure Maps platform and want to hear from you. - Have a feature request? Add it or vote up the request on our Feedback site. - Found a map data issue? Send it directly to our data provider using TomTom’s Map Share Reporter tool. - Having an issue getting your code to work? Have a topic you would like us to cover on the Azure blog? Ask us on the Azure Maps forums. We’re here to help and want to make sure you get the most out of the Azure Maps platform. - Looking for code samples or wrote a great one you want to share? Join us on GitHub.
https://azure.microsoft.com/en-gb/blog/data-driven-styling-and-more-in-the-latest-azure-maps-web-sdk-update/
CC-MAIN-2020-24
en
refinedweb
[001] Create WPF Projects Like a PRO

erkn ・2 min read

TL;DR; Read only bold ones you fat. I bet you didn't know you can do it like this.

[1] A purpose to open a WPF project
[2] JetBrains Rider (No Tonny, Visual Studio is bad for your health.)
[3] Open Rider ♥
[4] Click New Solution
[5] Select .Net Core (But sir, I don't want dotnet core, I want my project in normal regular NetFramework X.Y bla bla bla...) Just select the fucking DotNet Core Desktop Application, I will get there alright.
[6] Check Create .git and thank me now.
[7] Finally, hit the Create button (Do not forget to give a name to your $0.0F project.)
[8] Open the [projectname].csproj file. We have work to do.
[9] Change <TargetFramework> to <TargetFrameworks> for supporting more customers and earning more money and doing more writing of course.
[10] Since you don't want to write with .Net Core, remove netcoreapp3.1 and write net48
[11] For now, you should be able to Build and Run your project and see the worthless Window. Well done! You might want to add multiple projects for your Business or Core or BLL for your garbage code.
[12] Guess the next step. When you create a new project you will always use .NetCore, alright. To understand the reason you can create a regular project and compare *.csproj files.
[13] Go create a .NetCore ClassLibrary and repeat [8], [9] and [10]
[14] When you create a class library on Rider, it does not automatically add <UseWPF>true</UseWPF> like it adds on DesktopApplication, so add it.
[15] Also the same goes for Sdk="Microsoft.NET.Sdk.WindowsDesktop" too.
[16] As the last step before writing your own code, create a class named AssemblyInfo.cs filled with this code:

using System.Windows;

[assembly: ThemeInfo(
    ResourceDictionaryLocation.None,
    ResourceDictionaryLocation.SourceAssembly
)]

Now you are good to go. If you do not do step [14] or [15] or [16], Rider cannot build and complains that you suck and do not deserve to use JetBrains Rider, so you should go back to using Visual Studio like every regular boring developer does. (When Rider fixes this problem, this paragraph will destroy itself.)
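For reference, the edited csproj from steps [8] to [15] should end up looking roughly like this (a sketch; OutputType and the exact set of properties may differ in the file Rider generates):

```xml
<Project Sdk="Microsoft.NET.Sdk.WindowsDesktop">

  <PropertyGroup>
    <OutputType>WinExe</OutputType>
    <TargetFrameworks>net48</TargetFrameworks>
    <UseWPF>true</UseWPF>
  </PropertyGroup>

</Project>
```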
https://practicaldev-herokuapp-com.global.ssl.fastly.net/erkantaylan/001-create-wpf-projects-like-a-pro-12ba
CC-MAIN-2020-24
en
refinedweb
<grin/>

- datatypes are complete, but structure support is very limited.
- extend the shell with:
  - edit
  - load/save
  - mv (yum, yum, but it's harder because directories are ordered in our case, mvup and mvdown would be required)

Done:
=====

- Add HTML validation using the XHTML DTD
  - problem: do we want to keep and maintain the code for handling DTD/System ID cache directly in libxml ?
  => not really done that way, but there are new APIs to check elements or attributes. Otherwise XHTML validation directly ...
- XML Schemas datatypes except Base64 and BinHex
- Relax NG validation
- XmlTextReader streaming API + validation
- Add a DTD cache prefilled with xhtml DTDs and entities and a program to manage them -> like the /usr/bin/install-catalog from SGML right place seems $datadir/xmldtds Maybe this is better left to user apps => use a catalog instead, and xhtml1-dtd package
- Add output to XHTML => XML serializer automatically recognizes the DTD and applies the specific rules.
- Fix output of <tst val="x y"/>
- compliance to XML-Namespace checking, see section 6 of
- Correct standalone checking/emitting (hard) 2.9 Standalone Document Declaration
- Implement OASIS XML Catalog support
- Get OASIS testsuite to a more friendly result, check all the results once stable. the check-xml-test-suite.py script does this
- Implement XSLT => libxslt
- Finish XPath => attributes addressing troubles => defaulted attributes handling => namespace axis ? done as XSLT got debugged
- bug reported by Michael Meallin on validation problems => Actually means I need to add support (and warn) for non-deterministic content models.
- Handle undefined namespaces in entity contents better ... at least issue a warning
- DOM needs int xmlPruneProp(xmlNodePtr node, xmlAttrPtr attr); => done it's actually xmlRemoveProp xmlUnsetProp xmlUnsetNsProp
- HTML: handling of Script and style data elements, need special code in the parser and saving functions (handling of < > " ' ...): Attributes are no problems since entities are accepted.
- DOM needs xmlAttrPtr xmlNewDocProp(xmlDocPtr doc, const xmlChar *name, const xmlChar *value)
- problem when parsing hrefs with & with the HTML parser (IRC ac)
- If the internal encoding is not UTF8 saving to a given encoding doesn't work => fix to force UTF8 encoding ... done, added documentation too
- Add an ASCII I/O encoder (asciiToUTF8 and UTF8Toascii)
- Issue warning when using non-absolute namespaces URI.
- the html parser should add <head> and <body> if they don't exist started, not finished. Done, the automatic closing is added and 3 testcases were inserted
- Command to force the parser to stop parsing and ignore the rest of the file. xmlStopParser() should allow this, mostly untested
- support for HTML empty attributes like <hr noshade>
- plugged iconv() in for support of a large set of encodings.
- xmlSwitchToEncoding() rewrite done
- URI checking (no fragments) rfc2396.txt
- Added a clean mechanism for overload or added input methods: xmlRegisterInputCallbacks()
- dynamically adapt the alloc entry point to use g_alloc()/g_free() if the programmer wants it:
  - use xmlMemSetup() to reset the routines used.
- Check attribute normalization especially xmlGetProp()
- Validity checking problems for NOTATIONS attributes
- Validity checking problems for ENTITY ENTITIES attributes
- Parsing of a well balanced chunk xmlParseBalancedChunkMemory()
- URI module: validation, base, etc ... see uri.[ch]
- turn tester into a generic program xmllint installed with libxml
- extend validity checks to go through entities content instead of just labelling them PCDATA
- Save Dtds using the children list instead of dumping the tables, order is preserved as well as comments and PIs
- Wrote a notice of changes required to go from 1.x to 2.x
- make sure that all SAX callbacks are disabled if a WF error is detected
- checking/handling of newline normalization
- correct checking of '&' '%' on entities content.
- checking of PE/Nesting on entities declaration
- checking/handling of xml:space
  - checking done.
  - handling done, not well tested
- Language identification code, productions [33] to [38] => done, the check has been added and reports WFness errors
- Conditional sections in DTDs [61] to [65] => should this crap be really implemented ??? => Yep OASIS testsuite uses them
- Allow parsed entities defined in the internal subset to override the ones defined in the external subset (DtD customization). => This means that the entity content should be computed only at use time, i.e. keep the orig string only at parse time and expand only when referenced from the external subset :-( Needed for complete use of most DTD from Eve Maler
- Add regression tests for all WFC errors => did some in test/WFC => added OASIS testsuite routines
- I18N: is not XML and accepted by the XML parser, UTF-8 should be checked when there is no "encoding" declared !
- Support for UTF-8 and UTF-16 encoding => added some conversion routines provided by Martin Durst patched them, got fixes from @@@ I plan to keep everything internally as UTF-8 (or ISO-Latin-X) this is slightly more costly but more compact, and recent processors efficiency is cache related. The key for good performances is keeping the data set small, so will I. => the new progressive reading routines call the detection code is enabled, tested the ISO->UTF-8 stuff
- External entities loading:
  - allow override by client code
  - make sure it is called for all external entities referenced
  Done, client code should use xmlSetExternalEntityLoader() to set the default loading routine. It will be called each time an external entity resolution is triggered.
- maintain ID coherency when removing/changing attributes The function used to deallocate attributes now checks for it being an ID and removes it from the table.
- push mode parsing i.e. non-blocking state based parser done, both for XML and HTML parsers. Use xmlCreatePushParserCtxt() and xmlParseChunk() and html counterparts. The tester program now has a --push option to select that parser front-end. Duplicated tests to use both and check results are similar.
- Most of XPath, still see some troubles and occasional memleaks.
- an XML shell, allowing to traverse/manipulate an XML document with a shell like interface, and using XPath for the naming syntax
  - use of readline and history added when available
  - the shell interface has been cleanly separated and moved to debugXML.c
- HTML parser, should be fairly stable now
- API to search the lang of an attribute
- Collect IDs at parsing and maintain a table. PBM: maintain the table coherency PBM: how to detect ID types in absence of DtD !
- Use it for XPath ID support
- Add validity checking Should be finished now !
- Add regression tests with entity substitutions
- External Parsed entities, either XML or external Subset [78] and [79] parsing the xmllang DtD now works, so it should be sufficient for most cases !
- progressive reading. The entity support is a first step toward abstraction of an input stream. A large part of the context is still located on the stack, moving to a state machine and putting everything in the parsing context should provide an adequate solution. => Rather than progressive parsing, give more power to the SAX-like interface. Currently the DOM-like representation is built but => it should be possible to define that only as a set of SAX callbacks and remove the tree creation from the parser code. DONE
- DOM support, instead of using a proprietary in memory format for the document representation, the parser should call a DOM API to actually build the resulting document. Then the parser becomes independent of the in-memory representation of the document. Even better using RPC's the parser can actually build the document in another program. => Work started, now the internal representation is by default very near a direct DOM implementation. The DOM glue is implemented as a separate module. See the GNOME gdome module.
- C++ support : John Ehresman <jehresma@dsg.harvard.edu>
- Updated code to follow more recent specs, added compatibility flag
- Better error handling, use a dedicated, overridable error handling function.
- Support for CDATA.
- Keep track of line numbers for better error reporting.
- Support for PI (SAX one).
- Support for Comments (bad, should be in ASAP, they are parsed but not stored), should be configurable.
- Improve the support of entities on save (+SAX).
https://android.googlesource.com/platform/external/libxml2/+/18663755e0843154e8d2b8052d3475db9c176e20/TODO
CC-MAIN-2020-24
en
refinedweb
@canrau/gatsby-plugin-react-head

WARNING: The whitelist option introduced in react-head@next is not working properly, at least not with the whitelist mentioned below. Please check my PR#84 over at react-head if you're interested in the details.

This plugin sets up react-head with server-rendering for you.

More about thread-safe meta tag management with react-head

Install

npm install --save react-head@next @canrau/gatsby-plugin-react-head

Note: @canrau/gatsby-plugin-react-head depends on react-head! To use the new whitelist feature you have to specifically install react-head@next as shown above; the whitelist option hasn't been merged into the main release so far as it's still a proof of concept. The un-namespaced version gatsby-plugin-react-head doesn't support the whitelist option.

Configuration

Add the plugin to your gatsby-config.js.

module.exports = {
  plugins: [
    {
      resolve: `@canrau/gatsby-plugin-react-head`,
      // optional options
      options: {
        // an array of whitelisted tags to disable `[data-rh]` attribute for them
        whitelist: [
          `title`,
          `[name="description"]`,
          `[property^="og:"]`,
          `[property^="fb:"]`,
        ],
      },
    },
  ],
}

Usage

import * as React from 'react';
import { Title, Link, Meta } from 'react-head';

const App = () => (
  <>
    <Title>GaiAma.org</Title>
    <Link rel="canonical" content="" />
    <Meta name="description" content="Protecting Amazonian rainforest in Peru" />
    // ...
  </>
);
https://www.gatsbyjs.org/packages/@canrau/gatsby-plugin-react-head/
CC-MAIN-2020-24
en
refinedweb
A static method or block belongs to the class, and these will be loaded into memory along with the class. You can invoke static methods without creating an object (using the class name as reference). The "super" keyword in Java is used as a reference to the object of the super class. This implies that to use "super" the method should be invoked on an object, which is not the case for static methods. Therefore, you cannot use the "super" keyword from a static method.

In the following Java program, the class SubClass extends SuperClass and tries to assign the inherited name field through "super" from within the static setName() method. Since setName() is static, the compiler rejects the use of "super" in it.

class SuperClass{
   protected String name;
}
public class SubClass extends SuperClass {
   private String name;
   public static void setName(String name) {
      super.name = name;
   }
   public void display() {
      System.out.println("name: "+super.name);
   }
   public static void main(String args[]) {
      new SubClass().display();
   }
}

SubClass.java:7: error: non-static variable super cannot be referenced from a static context
      super.name = name;
      ^
1 error
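One straightforward fix, sketched below, is to make setName() an instance method so that "super" refers to the current object:

```java
class SuperClass {
   protected String name;
}

public class SubClass extends SuperClass {
   public void setName(String name) { // instance method: 'super' is valid here
      super.name = name;
   }
   public void display() {
      System.out.println("name: " + super.name);
   }
   public static void main(String args[]) {
      SubClass obj = new SubClass();
      obj.setName("Tutorialspoint");
      obj.display(); // prints: name: Tutorialspoint
   }
}
```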
https://www.tutorialspoint.com/why-can-t-we-use-the-super-keyword-is-in-a-static-method-in-java
CC-MAIN-2020-24
en
refinedweb
In the previous tutorial, we discussed multiplexing seven-segment displays (SSDs). Continuing with the display devices, in this tutorial, we will cover how to interface character LCD when using Arduino. Character LCDs are the most common display devices used in embedded systems. These low-cost LCDs are widely used in industrial and consumer applications. Display devices in embedded systems Most devices require some sort of display for various reasons. For example, an air-conditioner requires a display that indicates the temperature and AC settings. A microwave oven requires a display to present the selected timer, temperature, and cooking options. A car dashboard uses a display to track distance, fuel indication, mileage, and fuel efficiency. Even a digital watch requires a display to showcase the time, date, alarm, and modes. There are also several reasons industrial machinery and electrical or electronic devices require a display. The display devices used in embedded circuits — whether they are industrial devices, consumer electronic products, or fancy gadgets — are used either to indicate some information or to facilitate machine-human interface. For instance, LEDs are used as indicators of mutually exclusive conditions. The SSDs are used to display numeric information. The Liquid Crystal Displays (LCDs), TFTs, and OLED displays are used to present the more complicated information in embedded applications. Often, this complication arises due to the text or graphical nature of the information or the interface. LCDs are the most common display devices used in all sorts of embedded applications. There are two types of LCD displays available: 1. Character LCDs 2. Graphical LCDs. The character LCDs are used where the information or interface is of a textual nature. The graphical LCDs are used where the information or interface is of a graphical nature. The graphical LCDs that are used to design machine-human interfaces may also have touchscreens. Character LCDs Character LCDs are useful in showing textual information or to provide a text-based, machine-human interface. It’s even possible to display some minimal graphics on these LCDs. These are low-cost LCD displays that fit in a wide range of embedded applications. Generally, character LCDs do not have touchscreens. And unlike graphical LCDs, these LCDs do not have continuous pixels. Instead, the pixels on character LCDs are arranged as a group of pixels or dot-matrix of pixels of fixed dimensions. Each dot-matrix of a pixel is intended to display a text character. This group of pixels is usually of 5×7, 5×8, or 5×10 dimensions — where the first digit indicates the number of columns of pixels and the second digit indicates the number of rows of pixels. For example, if each character has 5×8 dimensions, then the character is displayed by illuminating 5 columns and 8 rows of pixels/dots. This may include pixels used to show the cursor. The character LCDs are classified by their size, which is expressed as the number of characters that can be displayed. The number of possible characters that can display at a time on the LCD is indicated as the number of columns of characters and the number of rows of characters. The common size of character LCDs is 8×1, 8×2, 10×2, 16×1, 16×2, 16×4, 20×2, 20×4, 24×2, 30×2, 32×2, 40×2, etc. For example, a 16×2 character LCD can display 32 characters at a time in 16 columns and 2 rows. 
Generally, characters are displayed as a matrix of black dots while the backlight of the LCD may be a monochromatic color like blue, white, amber, or yellow-green. LCDs are available as one of three types: 1. Twisted Nematic (TN) 2. Super Twisted Nematic (STN) 3. Focus Super Twisted Nematic (FSTN). The character LCDs may use any one of these types. The TN types are low-cost but have a narrow viewing angle and low contrast. The FSTN offers the best contrast and widest viewing angle, but is more costly. Even character LCDs that use the FSTN display are still cheaper in comparison to graphical LCDs, TFTs, and OLEDs. Most of the character LCDs use an LED backlight, and the backlight color can be white, blue, amber, or yellow-green. Other backlight types in character LCDs include EL, CCFL, internal power, external power, and 3.3 and 5V backlights. EL and LED backlights are the most common. The LCD may have a reflective, trans-reflective, or transmissive rear polarizer. The quality of the display depends on the LCD type, the backlight, and the nature of the rear polarizer used in the LCD panel.

When selecting an LCD panel for an embedded application, it's important to decide on the quality of the LCD display according to the requirements. This depends on the application, class of the device, nature of use (such as indoor or outdoor), target users of the device, intended user-experience, operating conditions (such as temperature and operating voltage), and cost limitations. For example, a character LCD that has to be used for the machine-human interface must have better contrast, a wide viewing angle, and a good backlight. The following table summarizes the important characteristics of any character LCD.

Even on a character LCD, a large number of pixels have to be controlled to display the text. A 16×2 character LCD in which each character is 5×8 pixels means that a total of 1280 pixels (16×2 characters x 5×8 pixels) have to be controlled. This requires interfacing the pixels across 16 rows (2 rows of characters x 8 rows in each character) and 80 columns (16 columns of characters x 5 columns in each character) of connections. This is when the pixels are simple black dots that merely require switching either ON or OFF by the controller to display text characters. On a typical microcontroller, there are not that many I/O pins that can be dedicated to controlling the pixels of an LCD panel. That is why LCD modules have integrated controllers that control the pixels of the LCD. The integrated controller can interface with a microcontroller or a processor via an 8-bit/4-bit parallel port or a serial interface (like I2C). The integrated controller receives data and commands from the microcontroller/processor to display text on the LCD panel via a 4-bit/8-bit parallel or serial interface. In fact, the LCD module is a complete embedded system comprising an LCD panel, LCD driver, LCD controller, LED Backlight, internal flags, Address Counter, Display Data RAM (DDRAM), Character Generator ROM (CGROM), Character Generator RAM (CGRAM), Data Register (DR), Instruction Register (IR), and Cursor Control Circuit.

Functional blocks of the LCD module

A character LCD module has these functional blocks:

1. LCD Panel. The character LCDs have the dot-matrix LCD panel. The text characters are displayed on the panel according to the commands and data received by the integrated controller.

2. System Interface. This module has a 4-bit and an 8-bit interface to connect with microcontrollers/processors. Some LCD modules also have a built-in serial interface (I2C) for communication with a controller. The selection of interface (4-bit or 8-bit) is determined by the DL bit of the Instruction Register (IR).

3. Data Register (DR). The Data Register is an internal register that stores data received from the microcontroller via the system interface. The value populated in the data register is compared with character patterns in Character Generator ROM (CGROM) to generate different standard characters.

4. Instruction Register (IR). The Instruction Register is an internal register that stores instructions received from the microcontroller via the system interface.

5. Character Generator ROM (CGROM). It's an internal Read-Only Memory (ROM) on the LCD module where the patterns for the standard characters are stored. For example, in a 16×2 LCD module, the CGROM stores 204 character patterns of 5×8 dots and 32 character patterns of 5×10 dots. So, the patterns for the 204 characters are permanently stored in the CGROM.

6. Character Generator RAM (CGRAM). User-defined characters can also be displayed on a character LCD. The patterns for custom characters are stored in CGRAM. On the 16×2 LCD, 5 custom characters of 5×8 pixels can be defined by a user program. The user needs to write the font data (which is the character pattern defining which pixels/dots must be ON and which must be OFF to properly display the character) to generate these characters.

7. Display Data RAM (DDRAM). The data sent to the LCD module by the microcontroller remains stored in DDRAM. In a 16×2 character LCD, DDRAM can store a maximum of 80 8-bit characters, where a maximum of 40 characters per row can be stored.

8. Address Counter (AC). The Address Counter is an internal register that stores DDRAM/CGRAM addresses that are transferred by the Instruction register. The AC reads the DDRAM/CGRAM addresses from bits DB0-DB6 of the instruction register. After writing into the DDRAM/CGRAM, the AC is automatically increased by one, while after reading from the DDRAM/CGRAM, the AC is automatically decreased by one.

9. Busy Flag (BF). The bit DB7 of the instruction register is the busy flag of the LCD module. When the LCD is performing some internal operations, this flag is set (HIGH). During this time, the instruction register does not accept any new instruction via the system interface from the microcontroller. New instructions can be written to the IR only when the busy flag is clear (LOW).

10. Cursor/Blink Control Circuit. This controls the ON/OFF status of the cursor/blink at the cursor position. The cursor appears at the DDRAM address currently set in the AC. For example, if the AC is set to 07H, then the cursor is displayed at the DDRAM address 07H.

11. LCD Driver. It controls the LCD panel and the display. In the 16×2 character LCD, the LCD driver circuit consists of 16 common signal drivers and 40 segment signal drivers.

12. Timing Generation Circuit. It generates the timing signals for the operation of internal circuits, such as the DDRAM, CGRAM, and CGROM. The timing signals for reading the RAM (DDRAM/CGRAM) to display characters are generated separately from the timing signals for the internal operations of the integrated controller of the LCD. This is so that the display does not interfere with the internal operations of the integrated controller of the LCD module.

Interfacing character LCDs

Most of the character LCDs have a 14-pin or 16-pin system interface for communication with a microcontroller/processor.
Some LCD modules also have a built-in serial interface (I2C) for communication with a controller. The selection of interface (4-bit or 8-bit) is determined by the DL bit of the Instruction Register (IR). 3. Data Register (DR). Data Register is an internal register that stores data received by the microcontroller via the system interface. The value populated in the data register is compared with character patterns in Character Generator ROM (CGROM) to generate different standard characters. 4. Instruction Register (IR). Instruction Register is an internal register that stores instructions received by the microcontroller via the system interface. 5. Character Generator ROM (CGROM). It’s an internal Read-Only Memory (ROM) on the LCD module where the patterns for the standard characters are stored. For example, a 16×2 LCD module, CGROM has 5×8 dots, 204 character patterns, and 5×10 dots of 32 characters pattern that are stored. So, the patterns for the 204 characters are permanently stored in the CGROM. 6. Character Generator RAM (CGRAM). The user-defined characters can also be displayed on a character LCD. The patterns for custom characters are stored in CGRAM. On the 16×2 LCD, 5 characters of the 5×8 pixels can be defined by a user program. The user needs to write the font data (which is the character pattern defining what pixels/dots must ON and which must OFF to properly display the character) to generate these characters. 7. Display Data RAM (DDRAM). The data sent to the LCD module by the microcontroller remains stored in DDRAM. In 16×2 character LCD, DDRAM can store a maximum of 80 8-bit characters where the maximum of 40 characters for each row can be stored. 8. Address Counter (AC). The Address Counter is an internal register that stores DDRAM/CGRAM addresses that are transferred by the Instruction register. The AC reads the DDRAM/CGRAM addresses from bits DB0-DB6 of the instruction register. After writing into the DDRAM/CGRAM, the AC is automatically increased by one, while after reading from the DDRAM/CGRAM, the AC is automatically decreased by one. 9. Busy Flag (BF). The bit DB7 of the instruction register is a busy flag of the LCD module. When the LCD is performing some internal operations, this flag is set (HIGH). During this time, the instruction register does not accept any new instruction via the system interface from the microcontroller. New instructions can be written to the IR but only when the busy flag is clear (LOW). 10. Cursor/Blink Control Circuit. This controls the ON/OFF status of the cursor/blink at the cursor position. The cursor appears at the DDRAM address currently set in the AC. For example, if the AC is set to 07H, then the cursor is displayed at the DDRAM address 07H. 11. LCD Driver. It controls the LCD panel and the display. In the 16×2 character LCD, the LCD driver circuit consists of 16 common signal drivers and 40 segment signal drivers. 12. Timing Generation Circuit. It generates the timing signals for the operation of internal circuits, such as the DDRAM, CGRAM, and CGROM. The timing signals for reading RAM (DDRAM/CGRAM) module are generated separately to display characters and timing signals for the internal operations of the integrated controller/processor of LCD. This is so that the display does not interfere with the internal operations of the integrated controller of the LCD module. Interfacing character LCDs Most of the character LCDs have a 14-pin or 16-pin system interface for communication with a microcontroller/processor. 
Interfacing character LCDs
Most character LCDs have a 14-pin or 16-pin system interface for communication with a microcontroller/processor. The 16-pin system interface is the most common. Its pins are: pin 1 (VSS, ground), pin 2 (VDD/VCC, supply), pin 3 (VEE, contrast adjustment), pin 4 (RS, register select), pin 5 (RW, read/write), pin 6 (EN, enable), pins 7–14 (data lines DB0–DB7), pin 15 (LED+, backlight anode), and pin 16 (LED−, backlight cathode).
To interface the LCD module with a microcontroller or Arduino, the digital I/O pins of the microcontroller must be connected to the RS, RW, and EN pins and to the data pins DB0 to DB7. Typically, Arduino (or any microcontroller) does not need to read data from the LCD module, so the RW pin can be hard-wired to ground.
- If the LCD is interfaced with Arduino in 8-bit mode, the RS and EN pins and all eight data pins must be connected to digital I/O pins of the Arduino.
- If the LCD is interfaced with Arduino in 4-bit mode, the RS and EN pins and the data bits DB7 to DB4 must be connected to the Arduino's GPIO.
In 4-bit mode, two pulses are required at the EN pin to write a byte of data or an instruction to the LCD: the higher nibble is latched on the first pulse, and the lower nibble is transferred on the second pulse. In 8-bit mode, the entire 8-bit data/instruction is written to the LCD with a single pulse at the EN pin. The 4-bit mode therefore saves microcontroller pins at the cost of slightly slower transfers; the 8-bit mode is faster but occupies four extra pins. It is also possible to interface the LCD module with Arduino through a serial-to-parallel converter, in which case only two pins of the Arduino are required.
The ground pin of the LCD module (pin 1) must be connected to ground, while the VCC pin (pin 2) must be connected to the supply voltage; the 3.3 V or 5 V pin of the Arduino can supply the LCD module. The VEE pin must be connected to the wiper of a variable resistor whose fixed terminals are connected to VCC and ground. LED+ (pin 15) must be connected to VCC via a current-limiting resistor, and LED− (pin 16) must be connected to ground.
How a character LCD works
It is possible to both write data to and read data from the LCD module. To write data/instructions to the LCD module, the RW pin must be clear. With RS set, the 8-bit value sent by the microcontroller is stored in the data register (DR) of the LCD module; with RS clear, it is stored in the instruction register (IR). The value is transferred to the LCD from the microcontroller when a HIGH-to-LOW pulse is applied at the EN pin of the module.
When data is sent to the LCD module (RW=0, RS=1, EN=1->0), it is written into the DDRAM and the Address Counter of the LCD is incremented by one (in the default entry mode). The LCD controller uses the 8-bit code to look up the corresponding pattern in CGROM and displays the appropriate character at the position associated with that DDRAM address. When an instruction is sent to the LCD module (RW=0, RS=0, EN=1->0), it is stored in the instruction register, and the appropriate operation from the controller's pre-defined instruction set is executed on the display (set display ON, set display OFF, set cursor ON, set cursor OFF, clear DDRAM, and so on).
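The write sequence just described is straightforward to bit-bang. The following is a rough sketch of a 4-bit write routine for an already-initialized module; the pin constants are hypothetical placeholders matching one possible wiring, RW is assumed tied to ground, the pins are assumed already configured as OUTPUTs, and a fixed delay stands in for the busy-flag polling a production driver would prefer.

// Placeholder pin assignments; adjust to your own wiring.
const int RS_PIN = 13, EN_PIN = 11;
const int D4 = 7, D5 = 6, D6 = 5, D7 = 4;

// Put one nibble on DB4-DB7 and clock it in with a HIGH-to-LOW pulse on EN.
static void lcdPulseNibble(uint8_t nibble) {
  digitalWrite(D4, (nibble >> 0) & 1);
  digitalWrite(D5, (nibble >> 1) & 1);
  digitalWrite(D6, (nibble >> 2) & 1);
  digitalWrite(D7, (nibble >> 3) & 1);
  digitalWrite(EN_PIN, HIGH);
  delayMicroseconds(1);         // minimum enable pulse width
  digitalWrite(EN_PIN, LOW);    // the module latches the nibble on this falling edge
}

// RS=0 sends a command to the IR; RS=1 sends a character code to the DR.
static void lcdSend(uint8_t value, bool isData) {
  digitalWrite(RS_PIN, isData ? HIGH : LOW);
  lcdPulseNibble(value >> 4);   // higher nibble first...
  lcdPulseNibble(value & 0x0F); // ...then the lower nibble
  delayMicroseconds(50);        // crude wait instead of reading the busy flag
}

With these helpers, lcdSend('A', true) writes a character at the current DDRAM address, and lcdSend(0x01, false) clears the display (allow about 2 ms for that command to complete).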
Sometimes, the microcontroller may need to read data from the LCD. A microcontroller can read the content of the instruction register, the DDRAM, and the CGRAM of the LCD. To read from the LCD, the RW pin must be set. When the RW is set and the RS is clear, the microcontroller reads the content of the Instruction Register (IR) — including the busy flag (DB7 of IR) and the address counter (DB6 to DB0 of IR) — when applying a HIGH-to-LOW pulse at the EN pin. When both the RW and the RS are set, the microcontroller reads the content of the DDRAM or CGRAM at the current value of the address counter. In summary: the instruction register is read when RW=1, RS=0, EN=1->0, and the DDRAM/CGRAM content at the current address counter is read when RW=1, RS=1, EN=1->0.
LCD character set
The character patterns and their data register values come from the CGROM. In a typical HD44780-compatible 16×2 LCD, the character codes largely follow ASCII for codes 0x20–0x7D, with the remaining codes mapped to additional symbols (and, in the common ROM variant, Japanese katakana); the exact set depends on the ROM version of the module.
LCD commands
A 16×2 LCD module is controlled by a set of 8-bit commands. In the common HD44780-compatible instruction set, these include clear display (0x01), return home (0x02), entry mode set, display ON/OFF and cursor/blink control, cursor/display shift, function set (which selects the 4-/8-bit interface, the number of lines, and the font), set CGRAM address, and set DDRAM address.
LCD functions using Arduino
If the LCD module is interfaced to a typical microcontroller (8051, PIC, AVR, etc.), the RS, RW, EN, and data bits need to be manipulated individually to perform the read/write operations. Arduino has a LiquidCrystal library (LiquidCrystal.h) that makes programming the LCD with Arduino extremely easy. This library can be imported with the following statement:
#include <LiquidCrystal.h>
The library uses these methods to control a character LCD:
1. LiquidCrystal()
2. lcd.begin()
3. lcd.clear()
4. lcd.home()
5. lcd.setCursor(col, row)
6. lcd.write(data)
7. lcd.print(data)/lcd.print(data, BASE)
8. lcd.cursor()
9. lcd.noCursor()
10. lcd.blink()
11. lcd.noBlink()
12. lcd.display()
13. lcd.noDisplay()
14. lcd.scrollDisplayLeft()
15. lcd.scrollDisplayRight()
16. lcd.autoscroll()
17. lcd.noAutoscroll()
18. lcd.leftToRight()
19. lcd.rightToLeft()
20. lcd.createChar(num, data)
LiquidCrystal() method
This method creates a LiquidCrystal object. The object must be created according to the circuit connections between the LCD module and the Arduino: the Arduino pin numbers to which the RS, RW, EN, and data pins (DB7–DB0 for 8-bit mode, DB7–DB4 for 4-bit mode) of the LCD are connected are passed as arguments. The method has this syntax:
If the LCD is connected in 4-bit mode and the R/W pin is grounded:
LiquidCrystal lcd(rs, enable, d4, d5, d6, d7);
If the LCD is connected in 4-bit mode and the R/W pin is also connected to the Arduino:
LiquidCrystal lcd(rs, rw, enable, d4, d5, d6, d7);
If the LCD is connected in 8-bit mode and the R/W pin is grounded:
LiquidCrystal lcd(rs, enable, d0, d1, d2, d3, d4, d5, d6, d7);
If the LCD is connected in 8-bit mode and the R/W pin is connected to the Arduino:
LiquidCrystal lcd(rs, rw, enable, d0, d1, d2, d3, d4, d5, d6, d7);
(The full implementation of the class can be found in the LiquidCrystal library source that ships with the Arduino IDE.)
lcd.begin() method
This method initializes the LCD module. It takes the size of the LCD, expressed as the number of columns and rows, as its arguments. It has this syntax:
lcd.begin(cols, rows)
Internally, begin() performs the controller's documented power-on initialization sequence before the display is used.
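For reference, the sort of initialization that begin() performs on an HD44780-compatible controller in 4-bit mode is sketched below, reusing the hypothetical lcdSend()/lcdPulseNibble() helpers from the earlier sketch. This is a simplified illustration of the datasheet's initialization-by-instruction sequence, not the library's actual source.

void lcdBegin16x2(void) {
  delay(50);                 // wait >40 ms after power-up
  // The controller wakes up in 8-bit mode: send "function set" three
  // times, then switch it to 4-bit mode.
  lcdPulseNibble(0x03); delay(5);
  lcdPulseNibble(0x03); delayMicroseconds(150);
  lcdPulseNibble(0x03); delayMicroseconds(150);
  lcdPulseNibble(0x02);      // 4-bit interface from here on
  lcdSend(0x28, false);      // function set: 4-bit, 2 lines, 5x8 font
  lcdSend(0x08, false);      // display OFF
  lcdSend(0x01, false);      // clear display
  delay(2);                  // the clear command needs ~1.5 ms
  lcdSend(0x06, false);      // entry mode: increment AC, no display shift
  lcdSend(0x0C, false);      // display ON, cursor OFF, blink OFF
}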
lcd.clear() method
This method clears the LCD display and positions the cursor in the top-left corner. It has this syntax:
lcd.clear()
Internally, it issues the controller's clear-display command and waits about 2 ms for the command to complete.
lcd.setCursor() method
This method positions the cursor at a given location on the LCD panel. It takes as arguments the column and row at which the cursor has to be placed and at which a subsequent character will be displayed. It has this syntax:
lcd.setCursor(col, row)
Internally, it computes the target DDRAM address as the column plus a per-row offset (0x00 for row 0 and 0x40 for row 1 on a 16×2 module) and issues the set-DDRAM-address command.
lcd.print() method
This method prints text to the LCD. It takes as argument a value that is displayed starting at the current cursor position; when printing numbers, the base of the value can be passed as an optional second argument. It has this syntax:
lcd.print(data)
lcd.print(data, BASE)
This method is inherited from the Print class (Print.h), on which the LiquidCrystal library builds; Print converts the value to characters and calls write() for each of them.
How to check the LCD
A common concern when interfacing the LCD module is identifying whether or not the LCD module is actually working. When the LCD is connected to the Arduino (or any other MCU), if only the lower line of the LCD brightens, the LCD module is working; if both lines of a 16×2 LCD brighten, the LCD is not working properly. Sometimes, when you try to print to the LCD, nothing happens except the lower line of the LCD illuminating. In such a case, the possible reasons include the following:
1. There may be loose connections between the Arduino (MCU) and the LCD module.
2. The LCD module might have been interfaced in reverse pin order (i.e., circuit connections made from pin 16 to pin 1 of the LCD module instead of pin 1 to pin 16).
3. There may be shorts between LCD terminals due to faulty soldering.
4. The contrast at the VEE pin might not be adjusted properly. If adjusting the contrast does not help, try connecting the VEE pin directly to ground so that the LCD module is set to maximum contrast.
5. If the LCD panel still does not display text after all circuit connections have been checked, verify that the code uploaded to the Arduino is correct. For example, if the display is not cleared after initialization, garbage values may be shown on the LCD instead of the intended text.
Recipe: Printing text on the 16×2 character LCD
In this tutorial, we will print simple text on the 16×2 LCD panel from an Arduino UNO.
Components required
1. Arduino UNO x1
2. 16×2 character LCD x1
3. 10K pot x1
4. Breadboard x1
Circuit connections
The LCD module is interfaced with the Arduino UNO in 4-bit mode. Pin 1 (GND) and pin 16 (LED−) of the LCD module are connected to ground, while pin 2 (VCC) is connected to VCC. Pin 15 (LED+) of the LCD module is connected to VCC via a small current-limiting resistor. Pin 3 (VEE) is connected to the wiper of a pot whose fixed terminals are connected to ground and VCC. The R/W pin is connected to ground, as the Arduino will only write data to the LCD module. The RS, EN, DB4, DB5, DB6, and DB7 pins of the LCD are connected to pins 13, 11, 7, 6, 5, and 4 of the Arduino UNO, respectively. The breadboard supplies the common ground and 5V rails from one of the ground pins and the 5V pin of the Arduino UNO, respectively.
Circuit diagram
The wiring follows the connections described above.
Arduino sketch
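The complete sketch, assembled from the code fragments discussed in the programming guide below, looks like this:

#include <LiquidCrystal.h>

//LiquidCrystal lcd(RS, E, D4, D5, D6, D7);
LiquidCrystal lcd(13, 11, 7, 6, 5, 4);

void setup() {
  lcd.begin(16, 2);      // initialize a 16x2 display
}

void loop() {
  lcd.clear();
  lcd.setCursor(1, 0);   // column 1 of line 0
  lcd.print("EEWORLDONLINE");
  lcd.setCursor(0, 1);   // column 0 of line 1
  lcd.print("EngineersGarage");
  delay(750);

  lcd.clear();
  lcd.setCursor(0, 0);   // swap the strings between the two lines
  lcd.print("EngineersGarage");
  lcd.setCursor(1, 1);
  lcd.print("EEWORLDONLINE");
  delay(750);
}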
How the project works
The LCD module is connected to the Arduino in 4-bit mode. First, the LCD is initialized and the display is cleared to get rid of any garbage values in the DDRAM. The cursor is set to column 1 of line 0, and the text "EEWORLDONLINE" is printed on the LCD. Next, the cursor is moved to column 0 of line 1 and the text "EngineersGarage" is printed on the LCD. After a delay of 750 milliseconds, the LCD is cleared again. The cursor is moved to column 0 of line 0 and the text "EngineersGarage" is printed on the LCD; the cursor is then moved to column 1 of line 1 and the text "EEWORLDONLINE" is printed on the LCD. The Arduino UNO keeps repeating this code, alternately printing the two text strings on lines 0 and 1.
Programming guide
The LiquidCrystal.h library is imported in the code. Then an object of the LiquidCrystal class, named "lcd", is created:
#include <LiquidCrystal.h>
//LiquidCrystal lcd(RS, E, D4, D5, D6, D7);
LiquidCrystal lcd(13, 11, 7, 6, 5, 4);
In the setup() function, the LCD is initialized to the 16×2 size using the begin() method, like this:
void setup() {
lcd.begin(16, 2);
}
In the loop() function, the LCD display is cleared using the clear() method, and the cursor is set at column 1 of line 0 using the setCursor() method. The text "EEWORLDONLINE" is printed using the print() method on the "lcd" object. Similarly, the text "EngineersGarage" is printed at column 0 of line 1. A delay of 750 milliseconds is added using the delay() function.
void loop() {
lcd.clear();
lcd.setCursor(1, 0);
lcd.print("EEWORLDONLINE");
lcd.setCursor(0, 1);
lcd.print("EngineersGarage");
delay(750);
Next, the LCD display is cleared again and the positions of the two texts are swapped:
lcd.clear();
lcd.setCursor(0, 0);
lcd.print("EngineersGarage");
lcd.setCursor(1, 1);
lcd.print("EEWORLDONLINE");
delay(750);
}
The body of the loop() function keeps repeating until the Arduino is powered off. Therefore, the two texts keep displaying on the LCD module, alternating their positions between lines 0 and 1 of the panel. In the next tutorial, we will discuss how to scroll text on the LCD module.
https://www.engineersgarage.com/microcontroller-projects/articles-arduino-16x2-character-lcd-interfacing-driver/
CC-MAIN-2020-24
en
refinedweb
List-Item-Attachments control
Posted on 5 November 2018.
SPFx ListItemPicker Control
Posted on 22 October 2018
ListItemPicker control
This control allows you to select one or more items from a list based on a column value; the control suggests values based on the characters typed. Here is an example of the control. You can find the source code here:
How to use this control in your solutions
- Run npm i list-item-picker --save
- Import the control into your component:
import { ListItemPicker } from 'list-item-picker/lib/listItemPicker';
- Use the ListItemPicker control in your code as follows:
<ListItemPicker listId='da8daf15-d84f-4ab1-9800-7568f82fed3f' columnInternalName='Title' itemLimit={2} onSelectedItem={this.onSelectedItem} context={this.props.context} />
Control Properties
Bulk Upload Files with Metadata to SharePoint Online with PnP-PowerShell
My PowerShell script is shown in the screenshots; you must adapt the script to your needs and use it as a reference. The code is here:
Create Client Side App to SP2013 Using TypeScript, React and @PnP/PnPjs
Posted on 25 September 2018, 1 Comment
Sample of my service.ts class: I use a class, util.ts, for some helper code I use in my components. Here is my application's App.tsx:
https://joaojmendes.com/page/2/
CC-MAIN-2020-24
en
refinedweb
Design Vorlage Magazin features a set of related pictures. You can find the most recent pictures of Design Vorlage Magazin here and browse them easily. The pictures were published and uploaded by Admin and are kept in our collection, which consists of selected images chosen as the best among others; we hope they can become your motivation and an informational resource. Because our website focuses on this category, users can navigate easily, and we use a simple theme to help visitors find images. If your pictures appear on our website and you want to complain, you can file a complaint by sending an email to the address that can be found on the site. The pictures in the Design Vorlage Magazin collection are selected directly; you can browse them using the category navigation or through a random post of Design Vorlage Magazin. We hope you enjoy our collection and get inspired to beautify your residence. If a link is broken or an image does not display correctly on Design Vorlage Magazin, you can contact us to request the pictures you are looking for.
http://www.pipeda.info/design-vorlage-magazin.html
CC-MAIN-2018-13
en
refinedweb