https://chaseonsoftware.com/most-common-programming-case-types/#javascript-conventions

Most Common Programming Case Types

Published

2018/08/06


When working with computers—specifically while programming—you'll inevitably find yourself naming things (one of the two hard things in computer science).

A major factor in naming successfully is knowing which case type you want to use, so that you can keep a consistent convention per project/workspace. If you're writing software, you'll come across at least one of these in a language's specification for how it's written. Some languages (Go, in particular) rely heavily on you knowing the difference between two of them and using them correctly!



What You'll Learn



camelCase

In camelCase, a name (1) starts with a lowercase letter and (2) each subsequent word has its first letter capitalized and is compounded with the previous word.

For example, the variable "camel case var" written in camel case is camelCaseVar.



snake_case

snake_case is as simple as replacing all spaces with an "_" and lowercasing all the words. It's possible to mix snake_case with camelCase or PascalCase, but in my opinion that ultimately defeats the purpose.

For example, the variable "snake case var" written in snake case is snake_case_var.



kebab-case

kebab-case is as simple as replacing all spaces with a "-" and lowercasing all the words. It's possible to mix kebab-case with camelCase or PascalCase, but that ultimately defeats the purpose.

For example, the variable "kebab case var" written in kebab case is kebab-case-var.



PascalCase

In PascalCase, every word starts with an uppercase letter (unlike camelCase, where the first word starts with a lowercase letter).

For example, the variable "pascal case var" written in pascal case is PascalCaseVar.

Note: It's common to see this confused for camel case, but it's a separate case type altogether.



UPPER_CASE_SNAKE_CASE

UPPER_CASE_SNAKE_CASE replaces all spaces with an "_" and converts all the letters to capitals.

For example, the variable "upper case snake case var" written in upper case snake case is UPPER_CASE_SNAKE_CASE_VAR.



Which case type should I use?

Now that you know the various case types, let's tackle an example of my recommended best practice for filenames and when to use each case for Go, JavaScript, Python & Ruby.

What convention should I use when naming files?

Recommendation: always snake case

When naming files, it's important to ask "what's the lowest common denominator?" If you're not opinionated, I've found I've had the most success with snake case, because it's the least likely to create a problem across filesystems and keeps filenames readable, as in "my_awesome_file".

If you're a Mac user or work with Mac users, it's a good practice to always use lowercase. Macs use the HFS+ filesystem, and since HFS+ is not case-sensitive, it reads "MyFile" and "myfile" as the same "myfile".

My predominant argument for this stems from a particularly insidious "bug" I saw when I was running a CI/CD (continuous integration/continuous delivery) cluster. A CI job failed with "file not found: mycomponent.js" during a build for a React project. The developer swore the file was in the project's source, and as I dug through it, I noticed they had an import for "mycomponent.js" but the file was named "MyComponent.js" (in a React project, where PascalCase is the convention for naming component files). Because of the way HFS+ handles file casing, it happily accepted that "MyComponent.js" was "mycomponent.js" at the time the developer (using a Mac) was writing the code, but at the time the Unix-based CI server built it, the job failed because the server expected exact casing to find the file.



Go Conventions

Go is the language where it's most critical to pay attention to case type conventions. The language decides whether a variable, field or method is available to a package caller based on whether its name starts with an uppercase or lowercase letter.

  • Pascal case is required for exporting fields and methods in Go
  • Camel case is required for internal fields and methods in Go

 

package casetypes

type ExportedStruct struct {
    unexportedField string
}

In the above example, ExportedStruct is available to callers of the casetypes package, while unexportedField is only available to methods on ExportedStruct.




JavaScript Conventions

  • Camel case for variables and methods.
  • Pascal case for types and classes in JavaScript.
  • Upper case snake case for constants.

React Conventions

I write enough React, and it's unique enough, that its conventions are worth calling out here as a subsection:

  • Pascal case is used for component names and file names in React.



Ruby Conventions

  • Pascal case is used for classes and modules in Ruby.
  • Snake case for symbols, methods and variables.
  • Upper case snake case for constants.



Python Conventions

  • Snake case for variables, methods, functions and module names (per PEP 8).
  • Pascal case for classes.
  • Upper case snake case for constants.

Other Conventions

  • kebab case in Lisp.
  • kebab case in HTTP URLs (most-common-programming-case-types/).
  • snake case in JSON property keys.



Quick Comparison Table

Case Type                      Example
Original Variable as String    some awesome var
Camel Case                     someAwesomeVar
Snake Case                     some_awesome_var
Kebab Case                     some-awesome-var
Pascal Case                    SomeAwesomeVar
Upper Case Snake Case          SOME_AWESOME_VAR
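As a rough sketch, the conversions in the table can be automated. The helper names below are my own invention, not part of any library:

```python
def to_camel(phrase):
    """ 'some awesome var' -> 'someAwesomeVar' """
    first, *rest = phrase.split()
    return first.lower() + "".join(w.capitalize() for w in rest)

def to_snake(phrase):
    """ 'some awesome var' -> 'some_awesome_var' """
    return "_".join(w.lower() for w in phrase.split())

def to_kebab(phrase):
    """ 'some awesome var' -> 'some-awesome-var' """
    return "-".join(w.lower() for w in phrase.split())

def to_pascal(phrase):
    """ 'some awesome var' -> 'SomeAwesomeVar' """
    return "".join(w.capitalize() for w in phrase.split())

def to_upper_snake(phrase):
    """ 'some awesome var' -> 'SOME_AWESOME_VAR' """
    return "_".join(w.upper() for w in phrase.split())

print(to_camel("some awesome var"))  # someAwesomeVar
```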

Now that you've been introduced to the most common case types, you're prepared to hop into most of the popular languages and know what conventions to keep when you're writing your own code!

 






Deploying a Django application on EC2 with Nginx + Gunicorn + Supervisor


I set up a Django application with the Nginx + Gunicorn + Supervisor combination, so here are my notes on the procedure.
This time we start from an OS with nothing installed.

Environment

OS: Amazon Linux AMI
Python: 3.6.1
Django: 1.11.4
Nginx: 1.10.3
Gunicorn: 19.7.1
Supervisor: 3.3.3

Installing Nginx

Install nginx

$ sudo yum install nginx

If Nginx is not available on Amazon Linux, run the following recommended command instead

$ sudo amazon-linux-extras install nginx1.12

Start nginx

$ sudo nginx

Configure nginx to start automatically

$ sudo chkconfig --add nginx
$ sudo chkconfig nginx on

*Note: Nginx uses port 80, so use the commands below to kill any process already using port 80, then start Nginx


$ sudo netstat -ltnp
tcp        0      0 0.0.0.0:111      0.0.0.0:*   LISTEN   3000/rpcbind
tcp        0      0 0.0.0.0:80       0.0.0.0:*   LISTEN   12056/httpd
tcp        0      0 0.0.0.0:22       0.0.0.0:*   LISTEN   3537/sshd
tcp        0      0 127.0.0.1:25     0.0.0.0:*   LISTEN   3493/master
tcp        0      0 127.0.0.1:6379   0.0.0.0:*   LISTEN   3399/redis-server 1
tcp6       0      0 :::111           :::*        LISTEN   3000/rpcbind
tcp6       0      0 :::80            :::*        LISTEN   12056/nginx: master
tcp6       0      0 :::22            :::*        LISTEN   3537/sshd

$ ps -ef | grep nginx
root     12056     1  0 09:08 ?      00:00:00 httpd: master process httpd
nginx    12057 12056  0 09:08 ?      00:00:00 httpd: worker process
nginx    12058 12056  0 09:08 ?      00:00:00 httpd: worker process
nginx    12059 12056  0 09:08 ?      00:00:00 httpd: worker process
nginx    12060 12056  0 09:08 ?      00:00:00 httpd: worker process
ec2-user 12165 11094  0 09:17 pts/1  00:00:00 grep --color=auto httpd

$ kill -9 12056
$ kill -9 12057
$ kill -9 12058
$ kill -9 12059
$ kill -9 12060


Confirm the autostart setting.
It's OK if it looks like the following

$ chkconfig | grep nginx
nginx           0:off   1:off   2:on    3:on    4:on    5:on    6:off

Access http://<IP address> and confirm that nginx is running properly.
If you see the following, it's OK.
(screenshot: Nginx welcome page)

Setting up the Python environment

This time I used Anaconda.
Download the Python 3.6 version from the Anaconda site.
Upload the downloaded package to /home/ec2-user with an FTP tool such as Cyberduck.

Once the upload finishes, install Anaconda with the command below

$ bash Anaconda3-4.4.0-Linux-x86_64.sh

After the installation completes, add Anaconda to PATH so its commands can be used

$ export PATH="$PATH:/home/ec2-user/anaconda3/bin"

Run a conda command to verify

$ conda info -e
# conda environments:
#
root                  *  /home/ec2-user/anaconda3

Looks good

Note that if you let the Anaconda installer add the environment variables to .bashrc during installation,

Do you wish the installer to prepend the Anaconda3 install location
to PATH in your /root/.bashrc ? [yes|no]
[no] >>> yes

then the root environment's python will also be 3.6

$ python --version
Python 3.6.1 :: Anaconda 4.4.0 (64-bit)

Creating the Django project

This time we create the project directly on EC2.
Normally you should git clone a Django application developed locally.
Also, we use the default SQLite DB here, but to run a real public service you should use PostgreSQL or MariaDB.

First, install Django.
Whether to run it in the root environment is somewhat debatable; creating a separate environment for Django is probably better, but for now we'll just install it in the root environment.

$ pip install django

If there are no problems, create the project

$ django-admin startproject test_project

Confirm that the project has been created

$ ls -ltr
total 511032
-rw-rw-r--  1 ec2-user ec2-user 523283080 Aug  3 04:50 Anaconda3-4.4.0-Linux-x86_64.sh
drwxrwxr-x 20 ec2-user ec2-user      4096 Aug  3 04:53 anaconda3
drwxrwxr-x  3 ec2-user ec2-user      4096 Aug  3 05:05 test_project

Edit ALLOWED_HOSTS in /test_project/test_project/settings.py as follows

settings.py
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True

ALLOWED_HOSTS = ["<server IP address>"]

Start Django with the command below.
By default, 127.0.0.1:8000 is used as the bind address, so you need to add 0.0.0.0:8000 as an option.
You also need to open port 8000 in the AWS security group beforehand.

$ cd test_project
$ python manage.py runserver 0.0.0.0:8000

Then access http://<IP address>:8000 and you can reach the Django application as shown below.
(screenshot: Django welcome page)

Installing Gunicorn

Gunicorn is a WSGI server written in Python.
A WSGI server is a server that connects a web server to a web application,
so picture a setup like Nginx <-> Gunicorn <-> Django.
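To make the WSGI idea concrete, here is a minimal WSGI callable, the same interface that Gunicorn expects to find as "application" in test_project/wsgi.py. This is an illustrative sketch; Django generates an equivalent application object for you:

```python
# A minimal WSGI application: a callable taking the request environ and
# a start_response function, returning an iterable of body bytes.
def application(environ, start_response):
    body = b"Hello from WSGI"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```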

First, install Gunicorn

$ pip install gunicorn

Once installed, start Django with Gunicorn

$ gunicorn test_project.wsgi --bind=0.0.0.0:8000

As before, access http://<IP address>:8000 to reach the Django application.
If you split settings.py into production and development versions, start it like this:

$ gunicorn test_project.wsgi --env DJANGO_SETTINGS_MODULE=test_project.settings_dev --bind=0.0.0.0:8000

Changing the Nginx configuration

Edit /etc/nginx/nginx.conf as follows

/etc/nginx/nginx.conf
# (snip)

http {
    # (snip)

    upstream app_server {
        server 127.0.0.1:8000 fail_timeout=0;
    }

    server {
        # comment out the following 4 lines
        #listen       80 default_server;
        #listen       [::]:80 default_server;
        #server_name  localhost;
        #root         /usr/share/nginx/html;

        # add the following 3 lines
        listen    80;
        server_name     <IP address or domain>;
        client_max_body_size    4G;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
            # add the following 4 lines
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass   http://app_server;
        }

    # (rest omitted)

After editing, restart nginx with the command below

$ sudo service nginx restart
Stopping nginx:                                            [  OK  ]
Starting nginx:                                            [  OK  ]

This completes the reverse-proxy configuration in Nginx.
Here we edited nginx.conf directly, but writing the settings in a separate file and having nginx include it is also fine.

After that, start Django with Gunicorn

$ gunicorn test_project.wsgi --bind=0.0.0.0:8000

This time, access http://<IP address> and the Django page should be displayed

Daemonizing the process with Supervisor

As things stand, the application stops when you interrupt the Gunicorn command or log out of the server.
To fix this, we daemonize the Gunicorn process with Supervisor.

We'd like to install Supervisor right away, but Supervisor runs only on Python 2 (true at the time of writing; Supervisor 4.0 and later also support Python 3).
So we create a Python 2 virtual environment with Anaconda and install Supervisor into it.

First, create a Python 2 virtual environment for Supervisor with the command below

$ conda create -n supervisor python=2.7

Switch to the Python 2 environment and install supervisor with pip

$ source activate supervisor
$ pip install supervisor

Once it installs without problems, generate a supervisor configuration file and place it under /etc

$ echo_supervisord_conf > supervisord.conf
$ sudo mv supervisord.conf /etc

Next, edit supervisord.conf as follows to configure supervisor

supervisord.conf
; (snip)
[supervisord]
logfile=/var/log/supervisord.log ; changed the log location
;logfile=/tmp/supervisord.log ; main log file; default $CWD/supervisord.log ; commented out
logfile_maxbytes=50MB        ; max main logfile bytes b4 rotation; default 50MB
logfile_backups=10           ; # of main logfile backups; 0 means none, default 10
loglevel=info                ; log level; default info; others: debug,warn,trace
pidfile=/var/run/supervisord.pid ; added
;pidfile=/tmp/supervisord.pid ; supervisord pidfile; default supervisord.pid ; commented out

; (snip)
; the [include] section is commented out by default, so uncomment it
[include]
files = supervisord.d/*.conf ; where the conf files of processes to run are placed
;files = relative/directory/*.ini

Create the log file in advance and set its permissions

$ sudo touch /var/log/supervisord.log
$ sudo chown ec2-user /var/log/supervisord.log
$ sudo chgrp ec2-user /var/log/supervisord.log
$ sudo chmod 774 /var/log/supervisord.log

Also set up log rotation

$ sudo sh -c "echo '/var/log/supervisord.log {
       missingok
       weekly
       notifempty
       nocompress
}' > /etc/logrotate.d/supervisor"

Next, create the files describing the commands for the processes to daemonize.
First, create the directory where those files will be placed

$ sudo mkdir /etc/supervisord.d

Create django_app.conf under /etc/supervisord.d.
In it, write the settings to daemonize the Gunicorn process as follows

django_app.conf
[program:django_app]
directory=/home/ec2-user/test_project
command=gunicorn test_project.wsgi --bind=0.0.0.0:8000
numprocs=1
autostart=true
autorestart=true
user=ec2-user
redirect_stderr=true

Specify the working directory in directory, and the command that starts the process in command

Once this is done, start supervisor with the command below

$ supervisord

Next, make supervisor read the conf file.
Always run this whenever you modify a conf file

$ supervisorctl reread

Alternatively, the command below restarts the daemon, which also reloads the conf files

$ supervisorctl reload

Daemonize the Gunicorn process with the command below

$ supervisorctl start django_app

If you get the message django_app: ERROR (already started), restart the process, or stop it and then start it again, with the following commands

$ supervisorctl stop django_app # stop
$ supervisorctl restart django_app # restart

Now try logging out of the server.
If you access http://<IP address>, the Django page is still displayed,
which means the Gunicorn process has been daemonized by Supervisor.

Nice.


https://stackoverflow.com/questions/39980323/are-dictionaries-ordered-in-python-3-6



Dictionaries are ordered in Python 3.6 (under the CPython implementation at least) unlike in previous incarnations. This seems like a substantial change, but it's only a short paragraph in the documentation. It is described as a CPython implementation detail rather than a language feature, but also implies this may become standard in the future.

How does the new dictionary implementation perform better than the older one while preserving element order?

Here is the text from the documentation:

dict() now uses a “compact” representation pioneered by PyPy. The memory usage of the new dict() is between 20% and 25% smaller compared to Python 3.5. PEP 468 (Preserving the order of **kwargs in a function.) is implemented by this. The order-preserving aspect of this new implementation is considered an implementation detail and should not be relied upon (this may change in the future, but it is desired to have this new dict implementation in the language for a few releases before changing the language spec to mandate order-preserving semantics for all current and future Python implementations; this also helps preserve backwards-compatibility with older versions of the language where random iteration order is still in effect, e.g. Python 3.5). (Contributed by INADA Naoki in issue 27350. Idea originally suggested by Raymond Hettinger.)

Update December 2017: dicts retaining insertion order is guaranteed for Python 3.7

  • 2
See this thread on the Python-Dev mailing list: mail.python.org/pipermail/python-dev/2016-September/146327.html if you haven't seen it; it's basically a discussion around these subjects. – mgc Oct 11 '16 at 15:11
  • 5
Notice that a long time ago (2003), Perl implementers decided to make hash tables (equivalent to Python dictionaries) not only explicitly unordered, but randomized for security reasons (perldoc.perl.org/perlsec.html#Algorithmic-Complexity-Attacks). So I would definitely not count on this "feature", because if the experience of others is any guide, it's probably doomed to be reversed at some point... – wazoox Oct 19 '16 at 13:50
  • 1
If kwargs are now supposed to be ordered (which is a nice idea) and kwargs are a dict, not an OrderedDict, then I guess one could assume that dict keys will stay ordered in future versions of Python, even though the documentation says otherwise. – Dmitriy Sintsov Jan 12 '17 at 12:32
  • 4
@DmitriySintsov No, don't make that assumption. This was an issue brought up during the writing of the PEP that defines the order-preserving feature of **kwargs, and as such the wording used is diplomatic: **kwargs in a function signature is now guaranteed to be an insertion-order-preserving mapping. They've used the term mapping in order to not force any other implementations to make the dict ordered (and use an OrderedDict internally) and as a way to signal that this isn't supposed to depend on the fact that the dict is not ordered. – Jim Fasarakis Hilliard Feb 4 '17 at 17:18
  • 6
    A good video explanation from Raymond Hettinger – Alex Jul 22 '17 at 16:38

Are dictionaries ordered in Python 3.6+?

They are insertion ordered[1]. As of Python 3.6, for the CPython implementation of Python, dictionaries remember the order of items inserted. This is considered an implementation detail in Python 3.6; you need to use OrderedDict if you want insertion ordering that's guaranteed across other implementations of Python (and other ordered behavior[1]).

As of Python 3.7, this is no longer an implementation detail and instead becomes a language feature. From a python-dev message by GvR:

Make it so. "Dict keeps insertion order" is the ruling. Thanks!

This simply means that you can depend on it. Other implementations of Python must also offer an insertion ordered dictionary if they wish to be a conforming implementation of Python 3.7.
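You can see the guarantee directly; on any Python 3.7+ implementation (and CPython 3.6), this snippet behaves the same:

```python
# Keys come back in exactly the order they were inserted.
d = {}
d["timmy"] = "red"
d["barry"] = "green"
d["guido"] = "blue"

print(list(d))  # ['timmy', 'barry', 'guido']
```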


How does the Python 3.6 dictionary implementation perform better[2] than the older one while preserving element order?

Essentially, by keeping two arrays.

  • The first array, dk_entries, holds the entries (of type PyDictKeyEntry) for the dictionary in the order that they were inserted. Preserving order is achieved by this being an append only array where new items are always inserted at the end (insertion order).

  • The second, dk_indices, holds the indices into the dk_entries array (that is, values that indicate the position of the corresponding entry in dk_entries). This array acts as the hash table. When a key is hashed, it leads to one of the indices stored in dk_indices, and the corresponding entry is fetched by indexing dk_entries. Since only indices are kept, the type of this array depends on the overall size of the dictionary (ranging from type int8_t (1 byte) to int32_t/int64_t (4/8 bytes) on 32/64-bit builds).

In the previous implementation, a sparse array of type PyDictKeyEntry and size dk_size had to be allocated; unfortunately, it also resulted in a lot of empty space, since that array was not allowed to be more than 2/3 * dk_size full for performance reasons (and the empty space still had PyDictKeyEntry size!).

This is not the case now, since only the required entries are stored (those that have been inserted), and only a sparse array of type intX_t (X depending on dict size), kept 2/3 * dk_size full, is allocated. The empty space changed from type PyDictKeyEntry to intX_t.

So, obviously, creating a sparse array of type PyDictKeyEntry is much more memory demanding than a sparse array for storing ints.

You can see the full conversation on Python-Dev regarding this feature if interested, it is a good read.


In the original proposal made by Raymond Hettinger, a visualization of the data structures used can be seen which captures the gist of the idea.

For example, the dictionary:

d = {'timmy': 'red', 'barry': 'green', 'guido': 'blue'}

is currently stored as:

entries = [['--', '--', '--'],
           [-8522787127447073495, 'barry', 'green'],
           ['--', '--', '--'],
           ['--', '--', '--'],
           ['--', '--', '--'],
           [-9092791511155847987, 'timmy', 'red'],
           ['--', '--', '--'],
           [-6480567542315338377, 'guido', 'blue']]

Instead, the data should be organized as follows:

indices =  [None, 1, None, None, None, 0, None, 2]
entries =  [[-9092791511155847987, 'timmy', 'red'],
            [-8522787127447073495, 'barry', 'green'],
            [-6480567542315338377, 'guido', 'blue']]

As you can visually now see, in the original proposal, a lot of space is essentially empty to reduce collisions and make look-ups faster. With the new approach, you reduce the memory required by moving the sparseness where it's really required, in the indices.
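As a toy sketch of this layout (not CPython's actual code: real dicts use open addressing with perturbation and resizing, while the sketch below uses simple linear probing and a fixed-size table):

```python
entries = []            # append-only list of (hash, key, value), in insertion order
indices = [None] * 8    # sparse hash table: slot -> position in entries

def insert(key, value):
    h = hash(key)
    slot = h % len(indices)
    while indices[slot] is not None:      # linear probing for the sketch
        slot = (slot + 1) % len(indices)
    indices[slot] = len(entries)          # sparseness lives here, in small ints
    entries.append((h, key, value))       # dense, ordered storage

def lookup(key):
    h = hash(key)
    slot = h % len(indices)
    while indices[slot] is not None:
        i = indices[slot]
        if entries[i][0] == h and entries[i][1] == key:
            return entries[i][2]
        slot = (slot + 1) % len(indices)
    raise KeyError(key)

insert("timmy", "red")
insert("barry", "green")
insert("guido", "blue")
print([e[1] for e in entries])  # ['timmy', 'barry', 'guido'] -- insertion order
```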


[1]: I say "insertion ordered" and not "ordered" since, with the existence of OrderedDict, "ordered" suggests further behavior that the dict object doesn't provide. OrderedDicts are reversible, provide order-sensitive methods and, mainly, provide order-sensitive equality tests (==, !=). dicts currently don't offer any of those behaviors/methods.


[2]: The new dictionary implementation performs better memory-wise by being designed more compactly; that's the main benefit here. Speed-wise, the difference isn't so drastic; there are places where the new dict might introduce slight regressions (key lookups, for example), while in others (iteration and resizing come to mind) a performance boost should be present.

Overall, the performance of the dictionary, especially in real-life situations, improves due to the compactness introduced.

  • 7
    So, what happens when an item is removed? is the entries list resized? or is a blank space kept? or is it compressed from time to time? – njzk2 Oct 11 '16 at 19:19
  • 10
@njzk2 When an item is removed, the corresponding index is replaced by DKIX_DUMMY with a value of -2, and the entry in the entries array is replaced by NULL; when inserting is performed, the new values are appended to the entries array. Haven't been able to discern yet, but pretty sure when the indices fill up beyond the 2/3 threshold, resizing is performed. This can lead to shrinking instead of growing if many DUMMY entries exist. – Jim Fasarakis Hilliard Oct 11 '16 at 20:03
  • 3
@Chris_Rands Nope, the only actual regression I've seen is on the tracker in a message by Victor. Other than that microbenchmark, I've seen no other issue/message indicating a serious speed difference in real-life workloads. There are places where the new dict might introduce slight regressions (key lookups, for example), while in others (iteration and resizing come to mind) a performance boost would be present. – Jim Fasarakis Hilliard Mar 14 '17 at 13:26
  • 2
    Correction on the resizing part: Dictionaries don't resize when you delete items, they re-calculate when you re-insert. So, if a dict is created with d = {i:i for i in range(100)} and you .pop all items w/o inserting, the size won't change. When you add to it again, d[1] = 1, the appropriate size is calculated and the dict resizes. – Jim Fasarakis Hilliard Aug 2 '17 at 19:47 
  • 4
@Chris_Rands I'm pretty sure it is staying. The thing is, and the reason why I changed my answer to remove blanket statements about 'dict being ordered', dicts aren't ordered in the sense OrderedDicts are. The notable issue is equality: dicts have an order-insensitive ==, OrderedDicts have an order-sensitive one. Dumping OrderedDicts and changing dicts to now have order-sensitive comparisons could lead to a lot of breakage in old code. I'm guessing the only thing that might change about OrderedDicts is its implementation. – Jim Fasarakis Hilliard Apr 10 '18 at 16:57


https://tecadmin.net/setup-autorun-python-script-using-systemd/


How To Setup Autorun a Python Script Using Systemd

Question – How to autorun a Python script using systemd? How to create your own systemd service using a Python script? How to configure a Python script to start as a systemd service? How to manage a Python service with systemctl?

Use this tutorial to run your Python script as a system service under systemd. You can easily start, stop or restart your script using the systemctl command. This will also enable the Python script to autorun on system startup.

Step 1 – Dummy Python Application

First of all, I have used a dummy Python script which listens on a specified port. Create a Python file as follows

sudo vi /usr/bin/dummy_service.py

and add the following content for the dummy service. You can use your own Python script as per your requirements.
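The tutorial's original snippet isn't reproduced here, so below is a hypothetical stand-in: a simple TCP listener (the port and reply text are my own choices), which is enough to demonstrate the systemd setup. When deploying, add a call to serve() at the bottom of the file; it blocks forever handling connections, and systemd supervises it.

```python
#!/usr/bin/python3
# Hypothetical stand-in for /usr/bin/dummy_service.py: a simple TCP
# listener that accepts connections on a port and replies with a line.
import socket

def serve(host="127.0.0.1", port=9000):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind((host, port))
        s.listen()
        while True:                       # loop forever; systemd manages us
            conn, _ = s.accept()
            with conn:
                conn.sendall(b"dummy service alive\n")

# When run under systemd, call serve() here at module bottom.
```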

Step 2 – Create Service File

Now, create a service file for systemd as follows. The file must have a .service extension and live under the /lib/systemd/system/ directory

sudo vi /lib/systemd/system/dummy.service

and add the following content in it. Change the Python script filename and location, and update the Description.

[Unit]
Description=Dummy Service
After=multi-user.target
Conflicts=getty@tty1.service

[Service]
Type=simple
ExecStart=/usr/bin/python3 /usr/bin/dummy_service.py
StandardInput=tty-force

[Install]
WantedBy=multi-user.target

Step 3 – Enable Newly Added Service

Your service has been added to the system. Let's reload the systemctl daemon to read the new file. You need to reload this daemon each time you make any change to a .service file.

sudo systemctl daemon-reload

Now enable the service to start on system boot, also start the service using the following commands.

sudo systemctl enable dummy.service
sudo systemctl start dummy.service

Step 4 – Stop/Start/Status of the New Service

Finally, check the status of your service with the following command.

sudo systemctl status dummy.service

Use the commands below to stop, start and restart your service manually.

sudo systemctl stop dummy.service          #To stop running service 
sudo systemctl start dummy.service         #To start running service 
sudo systemctl restart dummy.service       #To restart running service


https://www.lifewithpython.com/2014/01/python-add-directories-to-path-to-import-libraries-from.html


Python Tips: Adding directories to the library import path

Here is how to add a specific directory to the set of paths Python searches when importing libraries.

The directories Python searches when importing libraries are listed in sys.path.

import sys
print(sys.path)  # => a list containing the search paths

Printing sys.path shows every directory registered in it.

['C:\\dev\\project\\aipscm_slct_front_param', 'C:\\Program Files\\JetBrains\\PyCharm Community Edition 2018.3.3\\helpers\\pydev', 'C:\\dev\\project\\aipscm_slct_front_param', 'C:\\Program Files\\JetBrains\\PyCharm Community Edition 2018.3.3\\helpers\\third_party\\thriftpy', 'C:\\Program Files\\JetBrains\\PyCharm Community Edition 2018.3.3\\helpers\\pydev', 'C:\\ProgramData\\Miniconda3\\lib\\site-packages', 'C:\\ProgramData\\Miniconda3\\lib\\site-packages\\win32', 'C:\\ProgramData\\Miniconda3\\lib\\site-packages\\win32\\lib', 'C:\\ProgramData\\Miniconda3\\lib\\site-packages\\Pythonwin', 'C:\\dev\\project\\aipscm_slct_front_param'] <- the newly added path

If you add a directory to this list, in the same way as with any normal list, that directory is added to the import search path.

Here is a sample.

import sys

# add the /Users/username/Desktop directory to the import search path
sys.path.append("/Users/username/Desktop")

# now /Users/username/Desktop/mylib.py can be imported
import mylib

.append() adds to the end of the list; .insert(), which adds at a position other than the end, can also be used.
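A self-contained way to see this in action: create a throwaway module in a temporary directory, append that directory to sys.path, and import it (the module name here is my own invention):

```python
import importlib
import os
import sys
import tempfile

# Make a temp directory holding a module named path_demo_mod.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "path_demo_mod.py"), "w") as f:
    f.write("VALUE = 42\n")

sys.path.append(tmp)                       # add it to the import search path
mod = importlib.import_module("path_demo_mod")
print(mod.VALUE)  # 42
```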


For example, to use this technique to add the directory that contains the running script, do the following:
import sys
import os

sd = os.path.dirname(__file__)  # directory containing this script
sys.path.append(sd)


* Reference:

import os

# path of this file
file = __file__
# absolute path of this file
abspath = os.path.abspath(__file__)
# directory containing this file
n1 = os.path.dirname(os.path.abspath(__file__))
# parent directory of the directory containing this file
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

That's all.


https://medium.com/dailyjs/named-and-optional-arguments-in-javascript-using-es6-destructuring-292a683d5b4e


Named and Optional Arguments in JavaScript

Parse your arguments more cleanly using ES6 destructuring

Jim Rottinger
Oct 17, 2016 · 4 min read

Destructuring is perhaps the biggest syntactical change that came to JavaScript in the ES6 specification. While the new syntax may seem weird to many long-time JavaScript programmers, once you are able to wrap your head around it, using it can be very powerful.

If you are not yet familiar with destructuring, it is the ability to map an object literal or array literal to multiple assignment statements at the same time. For example:

Destructuring can be used with array syntax to assign each value in the array to the variable name in the corresponding position of the left-hand side array.
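The article's inline examples were embedded snippets that didn't survive; here is a sketch of the array form (variable names are my own):

```javascript
// Array destructuring assigns by position.
const [first, second, third] = ["red", "green", "blue"];
console.log(first);   // "red"
console.log(third);   // "blue"
```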

More commonly, however, it can be used with object literals to pull out the properties of the object you want to use as variables.
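A sketch of the object form (again, the names are my own):

```javascript
// Object destructuring pulls out properties by name.
const user = { name: "Jim", role: "engineer" };
const { name, role } = user;
console.log(name);  // "Jim"
console.log(role);  // "engineer"
```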

Note — In the above object literal example, the property names must be the same. Position does not matter here, unlike in the array example.

If you want to rename something during destructuring, you can use the keyName:newKeyName syntax on the left hand side of your destructuring.
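For instance, a hypothetical rename:

```javascript
// keyName:newKeyName renames the variable while destructuring.
const { name: userName } = { name: "Jim" };
console.log(userName);  // "Jim"
```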

At first, this may just seem like some syntactic sugar on assignment statements, but what makes it much more than that is the ability to assign default values to the variables at the time of destructuring.
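A sketch of defaults during destructuring:

```javascript
// role is missing on the right-hand side, so its default is used.
const { name = "anonymous", role = "guest" } = { name: "Jim" };
console.log(name);  // "Jim"
console.log(role);  // "guest"
```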

This is pretty significant. Say that the right-hand side of the assignment expression is not an object literal but a call to a function in another part of your application. One day, a developer comes along and implements a short-circuiting return statement in that other function, and now your call isn't getting the expected response. Being able to set up defaults at the time of the assignment makes safeguarding your code much easier.

Destructuring Function Parameters

When you pass an argument to a function, before that function begins executing, it assigns the argument you passed in to the corresponding parameter in its function signature. Since that is an assignment statement, it means we can use destructuring to assign parameter values in a function!
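A sketch, with a made-up function name:

```javascript
// The object argument is destructured directly in the signature.
function greet({ name, role }) {
  return `Hello, ${name} the ${role}!`;
}

console.log(greet({ name: "Jim", role: "engineer" }));  // "Hello, Jim the engineer!"
```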

Just as was shown before, we can also rename our keys during destructuring.
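For example (hypothetical names again):

```javascript
// Rename the incoming property while destructuring the parameter.
function greet({ name: userName }) {
  return `Hello, ${userName}!`;
}

console.log(greet({ name: "Jim" }));  // "Hello, Jim!"
```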

Lastly, we can assign default values to both the individual keys in our destructuring statement and the entire block itself.
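A sketch of both kinds of defaults together (the function and its keys are my own invention):

```javascript
// Defaults on each key, plus "= {}" as a default for the whole argument,
// so the function can even be called with no argument at all.
function connect({ host = "localhost", port = 8080 } = {}) {
  return `${host}:${port}`;
}

console.log(connect());                 // "localhost:8080"
console.log(connect({ port: 3000 }));   // "localhost:3000"
```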

While this seems a bit tedious to type out, it prevents us from having to check for the existence of every single argument we pass in.

Named and Optional Arguments

If you recall the first example of assigning default values during destructuring and combine that with what we learned in the last section, you might know where I’m going with this. If you can destructure function parameters, and you can assign default values during destructuring, AND the object literal names have to match during the destructuring process, this means that you can have named and optional parameters in your function signature! (so long as you use destructuring). Here is an example:
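A sketch of what such an example might look like (the function and its keys are my own invention):

```javascript
// "Named" arguments: callers pass an object, so order doesn't matter,
// and any omitted key silently takes its default.
function createUser({ name, age = null, isAdmin = false } = {}) {
  return { name, age, isAdmin };
}

const u = createUser({ isAdmin: true, name: "Jim" });
console.log(u);  // { name: 'Jim', age: null, isAdmin: true }
```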

Conclusion

Hopefully the title of this article was not too misleading. While we do not yet have true named and optional arguments in JavaScript in the same way C# does, I have just demonstrated a way to get functionally equivalent behavior using ES6 destructuring. While I do not foresee this pattern replacing positional arguments, it is very nice for situations such as receiving an object in a callback to a promise when making a network call to the server. Instead of hoping the network call returns exactly what you are expecting it to, you can use the pattern described in this post to explicitly define what you are expecting to receive and set up defaults for those values.

Let me know what you think in the comments below!



https://howto.lintel.in/python-__new__-magic-method-explained/



Python: __new__ magic method explained

Python is an object-oriented language; everything in Python is an object. Python has a special kind of method, called magic methods, whose names begin and end with double underscores.

When we talk about the magic method __new__, we also need to talk about __init__.

These methods are called when you instantiate a class (instantiation is the process of creating an instance from a class). The magic method __new__ is called while the instance is being created, and you can use it to customize instance creation. It is the first method to be called; __init__ is then called to initialize the instance.

The __new__ method takes the class reference as its first argument, followed by the arguments passed to the constructor (the arguments passed in the call to the class). __new__ is responsible for creating the instance, so you can use this method to customize object creation. Typically, __new__ returns a reference to the created instance object. __init__ is called once __new__ has completed execution.

You can create a new instance of the class by invoking the superclass's __new__ method using super(), something like super(CurrentClass, cls).__new__(cls[, ...]).

(The original post showed two code samples here: a usual class declaration and instantiation, and a class implementation with the __new__ method overridden, along with their output.)
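A minimal sketch of what those samples likely looked like (the class names are assumptions, not from the original), with the output shown in comments:

```python
class Spam:
    """Usual class declaration: only __init__ is defined."""
    def __init__(self, name):
        self.name = name


class TracedSpam:
    """Same idea, but with __new__ overridden to trace instance creation."""
    def __new__(cls, *args, **kwargs):
        print("__new__ called for", cls.__name__)
        # Delegate the actual creation to the superclass (object).
        return super().__new__(cls)

    def __init__(self, name):
        print("__init__ called")
        self.name = name


s = TracedSpam("eggs")
# Output:
# __new__ called for TracedSpam
# __init__ called
print(s.name)  # eggs
```

Note that __new__ runs before the instance exists, while __init__ receives the already-created instance.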

Note:

You can create the instance inside __new__ either by calling the superclass's __new__ via super() or, if the parent class is object, by calling object.__new__ directly. That is,

instance = super(MyClass, cls).__new__(cls)

or

instance = object.__new__(cls)

(Note that in Python 3, object.__new__ generally accepts only the class; passing extra *args or **kwargs to it raises a TypeError.)

What is the difference between __new__ and __init__?

                  __new__                            __init__
When called       Before the instance exists         After the instance is created
Role              Creates and returns the instance   Initializes the instance
First parameter   cls (the class)                    self (the instance)


Things to remember

If __new__ returns an instance of its own class, then the __init__ method of the newly created instance is invoked with that instance as the first argument (as in __init__(self[, ...])), followed by the arguments passed to __new__ (i.e. the arguments of the class call). So __init__ is called implicitly.

If __new__ returns something other than an instance of the class, the instance's __init__ method is not invoked. In that case you have to call __init__ yourself.

Applications

It is uncommon to override the __new__ method, but it is sometimes required when you are writing APIs, customizing class or instance creation, or abstracting something using classes.

SINGLETON USING __NEW__

You can implement the singleton design pattern using the __new__ method. A singleton class is a class that can have only one object, that is, only one instance of the class.

Here is how you can prevent more than one instance from being created by overriding __new__.
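A minimal sketch of the idea (the class name is an assumption):

```python
class Singleton:
    _instance = None  # cache for the single instance

    def __new__(cls, *args, **kwargs):
        # Create the instance only on the first call; reuse it afterwards.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance


a = Singleton()
b = Singleton()
print(a is b)  # True: both names refer to the same object
```

Because __new__ always returns the cached instance, every call of the class yields the same object.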

This technique is not limited to singletons; you can also impose a limit on the total number of instances created.

 

CUSTOMIZE INSTANCE OBJECT

You can customize the created instance and perform some operations on it before the initializer __init__ is called. You can also impose restrictions on instance creation based on some constraints.
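A possible sketch of both ideas (the class and its constraint are assumptions, not from the original post):

```python
class Temperature:
    def __new__(cls, celsius):
        # Constraint: reject physically impossible values before init runs.
        if celsius < -273.15:
            raise ValueError("temperature below absolute zero")
        instance = super().__new__(cls)
        # Customize the instance before __init__ is called.
        instance.created_by_new = True
        return instance

    def __init__(self, celsius):
        self.celsius = celsius


t = Temperature(25.0)
print(t.created_by_new, t.celsius)  # True 25.0
```

Here the attribute set in __new__ is already present by the time __init__ runs, and invalid instances are never created at all.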

 

Customize Returned Object

Usually, when you instantiate a class, it returns an instance of that class. You can customize this behaviour and return some other object instead.

The following simple example demonstrates returning an object other than the class instance.
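A sketch of what that example likely looked like (names assumed), with its output in comments:

```python
class MyClass:
    def __new__(cls, *args, **kwargs):
        print("creating instance")
        instance = super().__new__(cls)
        # __init__ will not run automatically, because we return
        # something other than the instance below; call it ourselves.
        instance.__init__(*args, **kwargs)
        return 3  # return an arbitrary object instead of the instance

    def __init__(self):
        print("initializing instance")


obj = MyClass()
# Output:
# creating instance
# initializing instance
print(obj)  # 3
```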

Here you can see that instantiating the class returns 3 instead of an instance reference, because we return 3 from the __new__ method rather than the created instance. We call __init__ explicitly; as mentioned above, we have to do so whenever we do not return the instance object from __new__.

The __new__ method is also used in conjunction with metaclasses to customize class creation.
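As an illustration (a hypothetical metaclass, not from the original post), a metaclass's __new__ can rewrite a class as it is being created:

```python
class UpperAttrMeta(type):
    def __new__(mcs, name, bases, namespace):
        # Uppercase every non-dunder attribute name at class-creation time.
        upper = {
            key if key.startswith("__") else key.upper(): value
            for key, value in namespace.items()
        }
        return super().__new__(mcs, name, bases, upper)


class Config(metaclass=UpperAttrMeta):
    host = "localhost"


print(Config.HOST)              # localhost
print(hasattr(Config, "host"))  # False: the name was rewritten
```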

Conclusion

There are many possibilities for how you can use this feature. Still, it is usually not necessary to override __new__ unless you are doing something related to instance creation.

Simplicity is better than complexity: to make life easier, use this method only when it is genuinely necessary.



https://stackoverflow.com/questions/4015417/python-class-inherits-object



Is there any reason for a class declaration to inherit from object?

I just found some code that does this and I can't find a good reason why.

class MyClass(object):
    # class code follows...
  • 1
    This creates a new-style class. – SLaks Oct 25 '10 at 14:20
  • 87
    The answer to this question (while simple) is quite difficult to find. Googling things like "python object base class" or similar comes up with pages and pages of tutorials on object oriented programming. Upvoting because this is the first link that led me to the search terms "old vs. new-style python objects" – vastlysuperiorman Dec 22 '15 at 20:42

Is there any reason for a class declaration to inherit from object?

tl;dr: In Python 3, apart from compatibility between Python 2 and 3, no reason. In Python 2, many reasons.


Python 2.x story:

In Python 2.x (from 2.2 onwards) there are two styles of classes, depending on the presence or absence of object as a base class:

  1. "classic" style classes: they don't have object as a base class:

    >>> class ClassicSpam:      # no base class
    ...     pass
    >>> ClassicSpam.__bases__
    ()
  2. "new" style classes: they have, directly or indirectly (e.g. by inheriting from a built-in type), object as a base class:

    >>> class NewSpam(object):           # directly inherit from object
    ...    pass
    >>> NewSpam.__bases__
    (<type 'object'>,)
    >>> class IntSpam(int):              # indirectly inherit from object...
    ...    pass
    >>> IntSpam.__bases__
    (<type 'int'>,) 
    >>> IntSpam.__bases__[0].__bases__   # ... because int inherits from object  
    (<type 'object'>,)

Without a doubt, when writing a class you'll always want to go for new-style classes. The perks of doing so are numerous, to list some of them:

  • Support for descriptors. Specifically, the following constructs are made possible with descriptors:

    1. classmethod: A method that receives the class as an implicit argument instead of the instance.
    2. staticmethod: A method that does not receive the implicit argument self as a first argument.
    3. properties with property: Create functions for managing the getting, setting and deleting of an attribute.
    4. __slots__: Saves memory consumption of a class and also results in faster attribute access. Of course, it does impose limitations.
  • The __new__ static method: lets you customize how new class instances are created.

  • Method resolution order (MRO): in what order the base classes of a class will be searched when trying to resolve which method to call.

  • Related to MRO, super calls. Also see, super() considered super.

If you don't inherit from object, forget these. A more exhaustive description of the previous bullet points along with other perks of "new" style classes can be found here.
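A quick Python 3 sketch of a few of these perks in one place (the class is an assumption, for illustration only):

```python
class Account:
    __slots__ = ("_balance",)  # fixed attribute set: no per-instance __dict__

    def __init__(self, balance=0):
        self._balance = balance

    @property
    def balance(self):
        # property: managed read access, implemented via a descriptor
        return self._balance

    @classmethod
    def empty(cls):
        # classmethod: receives the class itself as the implicit argument
        return cls(0)


a = Account.empty()
print(a.balance)               # 0
print(hasattr(a, "__dict__"))  # False, thanks to __slots__
```

In Python 2, every one of these features requires the class to be new-style, i.e. to inherit from object.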

One of the downsides of new-style classes is that the class itself is more memory demanding. Unless you're creating many class objects, though, I doubt this would be an issue and it's a negative sinking in a sea of positives.


Python 3.x story:

In Python 3, things are simplified. Only new-style classes exist (referred to plainly as classes), so the only difference when adding object is requiring you to type 8 more characters. This:

class ClassicSpam:
    pass

is completely equivalent (apart from their name :-) to this:

class NewSpam(object):
     pass

and to this:

class Spam():
    pass

All have object in their __bases__.

>>> [object in cls.__bases__ for cls in {Spam, NewSpam, ClassicSpam}]
[True, True, True]

So, what should you do?

In Python 2: always inherit from object explicitly. Get the perks.

In Python 3: inherit from object if you are writing code that tries to be Python agnostic, that is, it needs to work both in Python 2 and in Python 3. Otherwise don't, it really makes no difference since Python inserts it for you behind the scenes.

